00:00:00.001 Started by upstream project "autotest-per-patch" build number 132133 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.050 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.051 The recommended git tool is: git 00:00:00.051 using credential 00000000-0000-0000-0000-000000000002 00:00:00.053 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.078 Fetching changes from the remote Git repository 00:00:00.080 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.110 Using shallow fetch with depth 1 00:00:00.110 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.110 > git --version # timeout=10 00:00:00.163 > git --version # 'git version 2.39.2' 00:00:00.163 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.212 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.212 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.984 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.995 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.007 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:04.007 > git config core.sparsecheckout # timeout=10 00:00:04.017 > git read-tree -mu HEAD # timeout=10 00:00:04.033 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:04.049 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:04.049 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:04.154 [Pipeline] Start of Pipeline 00:00:04.167 [Pipeline] library 00:00:04.168 Loading library shm_lib@master 00:00:04.169 Library shm_lib@master is cached. Copying from home. 00:00:04.184 [Pipeline] node 00:00:04.193 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.194 [Pipeline] { 00:00:04.204 [Pipeline] catchError 00:00:04.206 [Pipeline] { 00:00:04.215 [Pipeline] wrap 00:00:04.221 [Pipeline] { 00:00:04.226 [Pipeline] stage 00:00:04.228 [Pipeline] { (Prologue) 00:00:04.438 [Pipeline] sh 00:00:04.717 + logger -p user.info -t JENKINS-CI 00:00:04.732 [Pipeline] echo 00:00:04.733 Node: WFP8 00:00:04.739 [Pipeline] sh 00:00:05.032 [Pipeline] setCustomBuildProperty 00:00:05.045 [Pipeline] echo 00:00:05.047 Cleanup processes 00:00:05.053 [Pipeline] sh 00:00:05.334 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.334 2409730 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.347 [Pipeline] sh 00:00:05.628 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.628 ++ grep -v 'sudo pgrep' 00:00:05.628 ++ awk '{print $1}' 00:00:05.628 + sudo kill -9 00:00:05.628 + true 00:00:05.642 [Pipeline] cleanWs 00:00:05.650 [WS-CLEANUP] Deleting project workspace... 00:00:05.650 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.657 [WS-CLEANUP] done 00:00:05.660 [Pipeline] setCustomBuildProperty 00:00:05.670 [Pipeline] sh 00:00:05.942 + sudo git config --global --replace-all safe.directory '*' 00:00:06.033 [Pipeline] httpRequest 00:00:06.796 [Pipeline] echo 00:00:06.797 Sorcerer 10.211.164.101 is alive 00:00:06.803 [Pipeline] retry 00:00:06.804 [Pipeline] { 00:00:06.813 [Pipeline] httpRequest 00:00:06.816 HttpMethod: GET 00:00:06.817 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.817 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.826 Response Code: HTTP/1.1 200 OK 00:00:06.827 Success: Status code 200 is in the accepted range: 200,404 00:00:06.827 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:12.641 [Pipeline] } 00:00:12.658 [Pipeline] // retry 00:00:12.665 [Pipeline] sh 00:00:12.948 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:12.964 [Pipeline] httpRequest 00:00:13.357 [Pipeline] echo 00:00:13.358 Sorcerer 10.211.164.101 is alive 00:00:13.368 [Pipeline] retry 00:00:13.370 [Pipeline] { 00:00:13.384 [Pipeline] httpRequest 00:00:13.388 HttpMethod: GET 00:00:13.388 URL: http://10.211.164.101/packages/spdk_899af6c35556773d93494c6a94d023acd5b69645.tar.gz 00:00:13.389 Sending request to url: http://10.211.164.101/packages/spdk_899af6c35556773d93494c6a94d023acd5b69645.tar.gz 00:00:13.396 Response Code: HTTP/1.1 200 OK 00:00:13.397 Success: Status code 200 is in the accepted range: 200,404 00:00:13.397 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_899af6c35556773d93494c6a94d023acd5b69645.tar.gz 00:01:16.290 [Pipeline] } 00:01:16.307 [Pipeline] // retry 00:01:16.315 [Pipeline] sh 00:01:16.599 + tar --no-same-owner -xf spdk_899af6c35556773d93494c6a94d023acd5b69645.tar.gz 00:01:19.145 [Pipeline] sh 00:01:19.430 + git -C spdk log --oneline -n5 00:01:19.430 899af6c35 lib/nvme: destruct controllers that failed init asynchronously 00:01:19.430 d1c46ed8e lib/rdma_provider: Add API to check if accel seq supported 00:01:19.430 a59d7e018 lib/mlx5: Add API to check if UMR registration supported 00:01:19.430 f6925f5e4 accel/mlx5: Merge crypto+copy to reg UMR 00:01:19.430 008a6371b accel/mlx5: Initial implementation of mlx5 platform driver 00:01:19.441 [Pipeline] } 00:01:19.455 [Pipeline] // stage 00:01:19.463 [Pipeline] stage 00:01:19.465 [Pipeline] { (Prepare) 00:01:19.481 [Pipeline] writeFile 00:01:19.497 [Pipeline] sh 00:01:19.780 + logger -p user.info -t JENKINS-CI 00:01:19.792 [Pipeline] sh 00:01:20.075 + logger -p user.info -t JENKINS-CI 00:01:20.089 [Pipeline] sh 00:01:20.375 + cat autorun-spdk.conf 00:01:20.375 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.375 SPDK_TEST_NVMF=1 00:01:20.375 SPDK_TEST_NVME_CLI=1 00:01:20.375 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.375 SPDK_TEST_NVMF_NICS=e810 00:01:20.375 SPDK_TEST_VFIOUSER=1 00:01:20.375 SPDK_RUN_UBSAN=1 00:01:20.375 NET_TYPE=phy 00:01:20.382 RUN_NIGHTLY=0 00:01:20.388 [Pipeline] readFile 00:01:20.414 [Pipeline] withEnv 00:01:20.417 [Pipeline] { 00:01:20.429 [Pipeline] sh 00:01:20.715 + set -ex 00:01:20.715 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:20.715 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:20.715 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.715 ++ SPDK_TEST_NVMF=1 00:01:20.715 ++ SPDK_TEST_NVME_CLI=1 00:01:20.715 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:01:20.715 ++ SPDK_TEST_NVMF_NICS=e810 00:01:20.715 ++ SPDK_TEST_VFIOUSER=1 00:01:20.715 ++ SPDK_RUN_UBSAN=1 00:01:20.715 ++ NET_TYPE=phy 00:01:20.715 ++ RUN_NIGHTLY=0 00:01:20.715 + case $SPDK_TEST_NVMF_NICS in 00:01:20.715 + DRIVERS=ice 00:01:20.715 + [[ tcp == \r\d\m\a ]] 00:01:20.715 + [[ -n ice ]] 00:01:20.715 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:20.715 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:20.715 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:20.715 rmmod: ERROR: Module irdma is not currently loaded 00:01:20.715 rmmod: ERROR: Module i40iw is not currently loaded 00:01:20.715 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:20.715 + true 00:01:20.715 + for D in $DRIVERS 00:01:20.715 + sudo modprobe ice 00:01:20.715 + exit 0 00:01:20.724 [Pipeline] } 00:01:20.739 [Pipeline] // withEnv 00:01:20.744 [Pipeline] } 00:01:20.757 [Pipeline] // stage 00:01:20.766 [Pipeline] catchError 00:01:20.767 [Pipeline] { 00:01:20.781 [Pipeline] timeout 00:01:20.781 Timeout set to expire in 1 hr 0 min 00:01:20.782 [Pipeline] { 00:01:20.796 [Pipeline] stage 00:01:20.798 [Pipeline] { (Tests) 00:01:20.812 [Pipeline] sh 00:01:21.094 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.094 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.094 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.094 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:21.094 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:21.094 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:21.094 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:21.094 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:21.094 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:21.094 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:21.094 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:21.094 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.094 + source /etc/os-release 00:01:21.094 ++ NAME='Fedora Linux' 00:01:21.094 ++ VERSION='39 (Cloud Edition)' 00:01:21.094 ++ ID=fedora 00:01:21.094 ++ VERSION_ID=39 00:01:21.094 ++ VERSION_CODENAME= 00:01:21.094 ++ PLATFORM_ID=platform:f39 00:01:21.094 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:21.094 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.094 ++ LOGO=fedora-logo-icon 00:01:21.094 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:21.094 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.094 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:21.094 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.094 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.094 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.094 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:21.094 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.094 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:21.095 ++ SUPPORT_END=2024-11-12 00:01:21.095 ++ VARIANT='Cloud Edition' 00:01:21.095 ++ VARIANT_ID=cloud 00:01:21.095 + uname -a 00:01:21.095 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:21.095 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:23.634 Hugepages 00:01:23.634 node hugesize free / total 00:01:23.634 node0 1048576kB 0 / 0 00:01:23.634 node0 2048kB 0 / 0 00:01:23.634 node1 1048576kB 0 / 0 00:01:23.634 node1 2048kB 0 / 0 
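At this point setup.sh status reports 0 / 0 hugepages on both NUMA nodes, i.e. nothing has been reserved yet; the device listing that follows shows the NVMe and I/OAT devices the job will use. For reference, a minimal sketch of how 2 MiB hugepages are typically reserved before SPDK tests run. The sysfs path is the standard kernel interface; the HUGEMEM variable is an SPDK setup.sh convention assumed here, not something this log exercises (the log only runs "setup.sh status").

    # reserve 1024 x 2 MiB hugepages system-wide (standard kernel sysfs knob)
    echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

    # or let the repo script size hugepages and bind devices in one step
    # (HUGEMEM in MB is an assumed setup.sh parameter, not taken from this log)
    sudo HUGEMEM=2048 ./spdk/scripts/setup.sh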
00:01:23.634 00:01:23.634 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:23.634 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:23.634 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:23.634 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:23.634 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:23.634 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:23.634 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:23.634 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:23.634 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:23.634 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:23.634 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:23.634 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:23.634 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:23.634 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:23.634 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:23.634 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:23.634 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:23.634 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:23.634 + rm -f /tmp/spdk-ld-path 00:01:23.634 + source autorun-spdk.conf 00:01:23.634 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.634 ++ SPDK_TEST_NVMF=1 00:01:23.634 ++ SPDK_TEST_NVME_CLI=1 00:01:23.634 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.634 ++ SPDK_TEST_NVMF_NICS=e810 00:01:23.634 ++ SPDK_TEST_VFIOUSER=1 00:01:23.634 ++ SPDK_RUN_UBSAN=1 00:01:23.634 ++ NET_TYPE=phy 00:01:23.634 ++ RUN_NIGHTLY=0 00:01:23.634 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:23.634 + [[ -n '' ]] 00:01:23.634 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.634 + for M in /var/spdk/build-*-manifest.txt 00:01:23.634 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:23.634 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.634 + for M in /var/spdk/build-*-manifest.txt 00:01:23.634 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:23.634 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.634 + for M in /var/spdk/build-*-manifest.txt 00:01:23.634 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:23.634 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:23.634 ++ uname 00:01:23.634 + [[ Linux == \L\i\n\u\x ]] 00:01:23.634 + sudo dmesg -T 00:01:23.634 + sudo dmesg --clear 00:01:23.634 + dmesg_pid=2410656 00:01:23.634 + [[ Fedora Linux == FreeBSD ]] 00:01:23.634 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.634 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.634 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:23.634 + [[ -x /usr/src/fio-static/fio ]] 00:01:23.634 + export FIO_BIN=/usr/src/fio-static/fio 00:01:23.634 + sudo dmesg -Tw 00:01:23.634 + FIO_BIN=/usr/src/fio-static/fio 00:01:23.634 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:23.634 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:23.634 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:23.634 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.634 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.634 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:23.635 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.635 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.635 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:23.635 10:29:51 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:23.635 10:29:51 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:23.635 10:29:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.635 10:29:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:23.635 10:29:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:23.635 10:29:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.635 10:29:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:23.635 10:29:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:23.635 10:29:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:23.635 10:29:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:23.635 10:29:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:23.635 10:29:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:23.635 10:29:51 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:23.894 10:29:51 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:23.894 10:29:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:23.894 10:29:51 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:23.894 10:29:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:23.894 10:29:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:23.894 10:29:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:23.894 10:29:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.894 10:29:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.894 10:29:51 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.894 10:29:51 -- paths/export.sh@5 -- $ export PATH 00:01:23.894 10:29:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.894 10:29:51 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:23.894 10:29:51 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:23.894 10:29:51 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730971791.XXXXXX 00:01:23.894 10:29:51 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730971791.6AXq9r 00:01:23.894 10:29:51 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:23.894 10:29:51 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:23.894 10:29:51 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:23.894 10:29:51 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:23.894 10:29:51 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:23.894 10:29:51 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:23.894 10:29:51 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:23.894 10:29:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.894 10:29:51 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:23.894 10:29:51 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:23.894 10:29:51 -- pm/common@17 -- $ local monitor 00:01:23.894 10:29:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.895 10:29:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.895 10:29:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.895 10:29:51 -- pm/common@21 -- $ date +%s 00:01:23.895 10:29:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.895 10:29:51 -- pm/common@21 -- $ date +%s 00:01:23.895 10:29:51 -- pm/common@21 -- $ date +%s 00:01:23.895 10:29:51 -- pm/common@25 -- $ sleep 1 00:01:23.895 10:29:51 -- pm/common@21 -- $ date +%s 00:01:23.895 10:29:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730971791 00:01:23.895 10:29:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730971791 00:01:23.895 10:29:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730971791 00:01:23.895 10:29:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730971791 00:01:23.895 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730971791_collect-vmstat.pm.log 00:01:23.895 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730971791_collect-cpu-temp.pm.log 00:01:23.895 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730971791_collect-cpu-load.pm.log 00:01:23.895 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730971791_collect-bmc-pm.bmc.pm.log 00:01:24.831 10:29:52 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:24.831 10:29:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:24.831 10:29:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:24.831 10:29:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.831 10:29:52 -- spdk/autobuild.sh@16 -- $ date -u 00:01:24.831 Thu Nov 7 09:29:52 AM UTC 2024 00:01:24.831 10:29:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:24.831 v25.01-pre-171-g899af6c35 00:01:24.831 10:29:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:24.831 10:29:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:24.831 10:29:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:24.831 10:29:52 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:24.831 10:29:52 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:24.832 10:29:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.832 ************************************ 00:01:24.832 START TEST ubsan 00:01:24.832 ************************************ 00:01:24.832 10:29:52 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:24.832 using ubsan 00:01:24.832 00:01:24.832 real 0m0.000s 00:01:24.832 user 0m0.000s 00:01:24.832 sys 0m0.000s 00:01:24.832 10:29:52 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:24.832 10:29:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.832 ************************************ 00:01:24.832 END TEST ubsan 00:01:24.832 ************************************ 00:01:24.832 10:29:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:24.832 10:29:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:24.832 10:29:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:24.832 10:29:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:24.832 10:29:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:24.832 10:29:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:24.832 10:29:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:24.832 10:29:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:24.832 
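The config_params string captured above is what autobuild.sh hands to ./configure in the next log entry, with --with-shared appended. A minimal sketch of reproducing the same build locally, assuming an SPDK checkout with the fio sources at /usr/src/fio and the RDMA/vfio-user prerequisites already installed; the checkout path is a placeholder, not taken from the log.

    # rebuild SPDK with the same options this job uses (sketch)
    cd /path/to/spdk                       # placeholder path
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"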
10:29:52 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:25.091 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:25.091 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:25.349 Using 'verbs' RDMA provider 00:01:38.522 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:50.731 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:50.731 Creating mk/config.mk...done. 00:01:50.731 Creating mk/cc.flags.mk...done. 00:01:50.731 Type 'make' to build. 00:01:50.731 10:30:17 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:50.731 10:30:17 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:50.731 10:30:17 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:50.731 10:30:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.731 ************************************ 00:01:50.731 START TEST make 00:01:50.731 ************************************ 00:01:50.731 10:30:17 make -- common/autotest_common.sh@1127 -- $ make -j96 00:01:50.731 make[1]: Nothing to be done for 'all'. 00:01:51.302 The Meson build system 00:01:51.302 Version: 1.5.0 00:01:51.302 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:51.302 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:51.302 Build type: native build 00:01:51.302 Project name: libvfio-user 00:01:51.302 Project version: 0.0.1 00:01:51.302 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:51.302 C linker for the host machine: cc ld.bfd 2.40-14 00:01:51.302 Host machine cpu family: x86_64 00:01:51.302 Host machine cpu: x86_64 00:01:51.302 Run-time dependency threads found: YES 00:01:51.302 Library dl found: YES 00:01:51.302 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:51.302 Run-time dependency json-c found: YES 0.17 00:01:51.302 Run-time dependency cmocka found: YES 1.1.7 00:01:51.302 Program pytest-3 found: NO 00:01:51.302 Program flake8 found: NO 00:01:51.302 Program misspell-fixer found: NO 00:01:51.302 Program restructuredtext-lint found: NO 00:01:51.302 Program valgrind found: YES (/usr/bin/valgrind) 00:01:51.302 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:51.302 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.302 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.302 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:51.302 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:51.302 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:51.302 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:51.302 Build targets in project: 8 00:01:51.302 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:51.302 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:51.302 00:01:51.302 libvfio-user 0.0.1 00:01:51.302 00:01:51.302 User defined options 00:01:51.302 buildtype : debug 00:01:51.302 default_library: shared 00:01:51.302 libdir : /usr/local/lib 00:01:51.302 00:01:51.302 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.869 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:52.127 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:52.127 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:52.127 [3/37] Compiling C object samples/null.p/null.c.o 00:01:52.127 [4/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:52.127 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:52.127 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:52.127 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:52.127 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:52.127 [9/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:52.127 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:52.127 [11/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:52.127 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:52.127 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:52.127 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:52.127 [15/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:52.127 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:52.127 [17/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:52.127 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:52.127 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:52.127 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:52.127 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:52.127 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:52.127 [23/37] Compiling C object samples/server.p/server.c.o 00:01:52.127 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:52.127 [25/37] Compiling C object samples/client.p/client.c.o 00:01:52.127 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:52.127 [27/37] Linking target samples/client 00:01:52.127 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:52.127 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:52.127 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:52.127 [31/37] Linking target test/unit_tests 00:01:52.385 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:52.385 [33/37] Linking target samples/lspci 00:01:52.385 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:52.385 [35/37] Linking target samples/null 00:01:52.385 [36/37] Linking target samples/gpio-pci-idio-16 00:01:52.385 [37/37] Linking target samples/server 00:01:52.385 INFO: autodetecting backend as ninja 00:01:52.385 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
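The 37 ninja steps above build the bundled libvfio-user, pulled in because the build was configured with --with-vfio-user, using the options meson summarizes earlier (buildtype debug, default_library shared, libdir /usr/local/lib). A minimal standalone equivalent, assuming a libvfio-user source checkout; the staging directory is a placeholder.

    # configure and build libvfio-user the same way this job does (sketch)
    meson setup build-debug --buildtype=debug -Ddefault_library=shared
    ninja -C build-debug
    # stage the install into a scratch prefix, as the next log entry does
    DESTDIR="$PWD/stage" meson install --quiet -C build-debug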
00:01:52.385 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:52.643 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:52.901 ninja: no work to do. 00:01:58.175 The Meson build system 00:01:58.175 Version: 1.5.0 00:01:58.175 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:58.175 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:58.175 Build type: native build 00:01:58.175 Program cat found: YES (/usr/bin/cat) 00:01:58.175 Project name: DPDK 00:01:58.175 Project version: 24.03.0 00:01:58.175 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:58.175 C linker for the host machine: cc ld.bfd 2.40-14 00:01:58.175 Host machine cpu family: x86_64 00:01:58.175 Host machine cpu: x86_64 00:01:58.175 Message: ## Building in Developer Mode ## 00:01:58.175 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:58.175 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:58.175 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:58.175 Program python3 found: YES (/usr/bin/python3) 00:01:58.175 Program cat found: YES (/usr/bin/cat) 00:01:58.175 Compiler for C supports arguments -march=native: YES 00:01:58.175 Checking for size of "void *" : 8 00:01:58.175 Checking for size of "void *" : 8 (cached) 00:01:58.175 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:58.175 Library m found: YES 00:01:58.175 Library numa found: YES 00:01:58.175 Has header "numaif.h" : YES 00:01:58.175 Library fdt found: NO 00:01:58.175 Library execinfo found: NO 00:01:58.175 Has header "execinfo.h" : YES 00:01:58.175 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:58.175 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:58.175 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:58.175 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:58.175 Run-time dependency openssl found: YES 3.1.1 00:01:58.175 Run-time dependency libpcap found: YES 1.10.4 00:01:58.175 Has header "pcap.h" with dependency libpcap: YES 00:01:58.175 Compiler for C supports arguments -Wcast-qual: YES 00:01:58.175 Compiler for C supports arguments -Wdeprecated: YES 00:01:58.175 Compiler for C supports arguments -Wformat: YES 00:01:58.175 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:58.175 Compiler for C supports arguments -Wformat-security: NO 00:01:58.175 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:58.175 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:58.175 Compiler for C supports arguments -Wnested-externs: YES 00:01:58.175 Compiler for C supports arguments -Wold-style-definition: YES 00:01:58.175 Compiler for C supports arguments -Wpointer-arith: YES 00:01:58.175 Compiler for C supports arguments -Wsign-compare: YES 00:01:58.175 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:58.175 Compiler for C supports arguments -Wundef: YES 00:01:58.175 Compiler for C supports arguments -Wwrite-strings: YES 00:01:58.175 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:58.175 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:58.175 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:58.175 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:58.175 Program objdump found: YES (/usr/bin/objdump) 00:01:58.175 Compiler for C supports arguments -mavx512f: YES 00:01:58.175 Checking if "AVX512 checking" compiles: YES 00:01:58.175 Fetching value of define "__SSE4_2__" : 1 00:01:58.175 Fetching value of define "__AES__" : 1 00:01:58.175 Fetching value of define "__AVX__" : 1 00:01:58.175 Fetching value of define "__AVX2__" : 1 00:01:58.175 Fetching value of define "__AVX512BW__" : 1 00:01:58.175 Fetching value of define "__AVX512CD__" : 1 00:01:58.175 Fetching value of define "__AVX512DQ__" : 1 00:01:58.175 Fetching value of define "__AVX512F__" : 1 00:01:58.175 Fetching value of define "__AVX512VL__" : 1 00:01:58.175 Fetching value of define "__PCLMUL__" : 1 00:01:58.175 Fetching value of define "__RDRND__" : 1 00:01:58.175 Fetching value of define "__RDSEED__" : 1 00:01:58.175 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:58.175 Fetching value of define "__znver1__" : (undefined) 00:01:58.175 Fetching value of define "__znver2__" : (undefined) 00:01:58.175 Fetching value of define "__znver3__" : (undefined) 00:01:58.175 Fetching value of define "__znver4__" : (undefined) 00:01:58.175 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:58.175 Message: lib/log: Defining dependency "log" 00:01:58.175 Message: lib/kvargs: Defining dependency "kvargs" 00:01:58.175 Message: lib/telemetry: Defining dependency "telemetry" 00:01:58.175 Checking for function "getentropy" : NO 00:01:58.175 Message: lib/eal: Defining dependency "eal" 00:01:58.175 Message: lib/ring: Defining dependency "ring" 00:01:58.175 Message: lib/rcu: Defining dependency "rcu" 00:01:58.175 Message: lib/mempool: Defining dependency "mempool" 00:01:58.175 Message: lib/mbuf: Defining dependency "mbuf" 00:01:58.175 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:58.175 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:58.175 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:58.175 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:58.175 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:58.175 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:58.175 Compiler for C supports arguments -mpclmul: YES 00:01:58.176 Compiler for C supports arguments -maes: YES 00:01:58.176 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:58.176 Compiler for C supports arguments -mavx512bw: YES 00:01:58.176 Compiler for C supports arguments -mavx512dq: YES 00:01:58.176 Compiler for C supports arguments -mavx512vl: YES 00:01:58.176 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:58.176 Compiler for C supports arguments -mavx2: YES 00:01:58.176 Compiler for C supports arguments -mavx: YES 00:01:58.176 Message: lib/net: Defining dependency "net" 00:01:58.176 Message: lib/meter: Defining dependency "meter" 00:01:58.176 Message: lib/ethdev: Defining dependency "ethdev" 00:01:58.176 Message: lib/pci: Defining dependency "pci" 00:01:58.176 Message: lib/cmdline: Defining dependency "cmdline" 00:01:58.176 Message: lib/hash: Defining dependency "hash" 00:01:58.176 Message: lib/timer: Defining dependency "timer" 00:01:58.176 Message: lib/compressdev: Defining dependency "compressdev" 00:01:58.176 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:58.176 Message: lib/dmadev: Defining dependency 
"dmadev" 00:01:58.176 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:58.176 Message: lib/power: Defining dependency "power" 00:01:58.176 Message: lib/reorder: Defining dependency "reorder" 00:01:58.176 Message: lib/security: Defining dependency "security" 00:01:58.176 Has header "linux/userfaultfd.h" : YES 00:01:58.176 Has header "linux/vduse.h" : YES 00:01:58.176 Message: lib/vhost: Defining dependency "vhost" 00:01:58.176 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:58.176 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:58.176 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:58.176 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:58.176 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:58.176 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:58.176 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:58.176 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:58.176 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:58.176 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:58.176 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:58.176 Configuring doxy-api-html.conf using configuration 00:01:58.176 Configuring doxy-api-man.conf using configuration 00:01:58.176 Program mandb found: YES (/usr/bin/mandb) 00:01:58.176 Program sphinx-build found: NO 00:01:58.176 Configuring rte_build_config.h using configuration 00:01:58.176 Message: 00:01:58.176 ================= 00:01:58.176 Applications Enabled 00:01:58.176 ================= 00:01:58.176 00:01:58.176 apps: 00:01:58.176 00:01:58.176 00:01:58.176 Message: 00:01:58.176 ================= 00:01:58.176 Libraries Enabled 00:01:58.176 ================= 00:01:58.176 00:01:58.176 libs: 00:01:58.176 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:58.176 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:58.176 cryptodev, dmadev, power, reorder, security, vhost, 00:01:58.176 00:01:58.176 Message: 00:01:58.176 =============== 00:01:58.176 Drivers Enabled 00:01:58.176 =============== 00:01:58.176 00:01:58.176 common: 00:01:58.176 00:01:58.176 bus: 00:01:58.176 pci, vdev, 00:01:58.176 mempool: 00:01:58.176 ring, 00:01:58.176 dma: 00:01:58.176 00:01:58.176 net: 00:01:58.176 00:01:58.176 crypto: 00:01:58.176 00:01:58.176 compress: 00:01:58.176 00:01:58.176 vdpa: 00:01:58.176 00:01:58.176 00:01:58.176 Message: 00:01:58.176 ================= 00:01:58.176 Content Skipped 00:01:58.176 ================= 00:01:58.176 00:01:58.176 apps: 00:01:58.176 dumpcap: explicitly disabled via build config 00:01:58.176 graph: explicitly disabled via build config 00:01:58.176 pdump: explicitly disabled via build config 00:01:58.176 proc-info: explicitly disabled via build config 00:01:58.176 test-acl: explicitly disabled via build config 00:01:58.176 test-bbdev: explicitly disabled via build config 00:01:58.176 test-cmdline: explicitly disabled via build config 00:01:58.176 test-compress-perf: explicitly disabled via build config 00:01:58.176 test-crypto-perf: explicitly disabled via build config 00:01:58.176 test-dma-perf: explicitly disabled via build config 00:01:58.176 test-eventdev: explicitly disabled via build config 00:01:58.176 test-fib: explicitly disabled via build config 00:01:58.176 test-flow-perf: explicitly disabled via build config 00:01:58.176 test-gpudev: explicitly 
disabled via build config 00:01:58.176 test-mldev: explicitly disabled via build config 00:01:58.176 test-pipeline: explicitly disabled via build config 00:01:58.176 test-pmd: explicitly disabled via build config 00:01:58.176 test-regex: explicitly disabled via build config 00:01:58.176 test-sad: explicitly disabled via build config 00:01:58.176 test-security-perf: explicitly disabled via build config 00:01:58.176 00:01:58.176 libs: 00:01:58.176 argparse: explicitly disabled via build config 00:01:58.176 metrics: explicitly disabled via build config 00:01:58.176 acl: explicitly disabled via build config 00:01:58.176 bbdev: explicitly disabled via build config 00:01:58.176 bitratestats: explicitly disabled via build config 00:01:58.176 bpf: explicitly disabled via build config 00:01:58.176 cfgfile: explicitly disabled via build config 00:01:58.176 distributor: explicitly disabled via build config 00:01:58.176 efd: explicitly disabled via build config 00:01:58.176 eventdev: explicitly disabled via build config 00:01:58.176 dispatcher: explicitly disabled via build config 00:01:58.176 gpudev: explicitly disabled via build config 00:01:58.176 gro: explicitly disabled via build config 00:01:58.176 gso: explicitly disabled via build config 00:01:58.176 ip_frag: explicitly disabled via build config 00:01:58.176 jobstats: explicitly disabled via build config 00:01:58.176 latencystats: explicitly disabled via build config 00:01:58.176 lpm: explicitly disabled via build config 00:01:58.176 member: explicitly disabled via build config 00:01:58.176 pcapng: explicitly disabled via build config 00:01:58.176 rawdev: explicitly disabled via build config 00:01:58.176 regexdev: explicitly disabled via build config 00:01:58.176 mldev: explicitly disabled via build config 00:01:58.176 rib: explicitly disabled via build config 00:01:58.176 sched: explicitly disabled via build config 00:01:58.176 stack: explicitly disabled via build config 00:01:58.176 ipsec: explicitly disabled via build config 00:01:58.176 pdcp: explicitly disabled via build config 00:01:58.176 fib: explicitly disabled via build config 00:01:58.176 port: explicitly disabled via build config 00:01:58.176 pdump: explicitly disabled via build config 00:01:58.176 table: explicitly disabled via build config 00:01:58.176 pipeline: explicitly disabled via build config 00:01:58.176 graph: explicitly disabled via build config 00:01:58.176 node: explicitly disabled via build config 00:01:58.176 00:01:58.176 drivers: 00:01:58.176 common/cpt: not in enabled drivers build config 00:01:58.176 common/dpaax: not in enabled drivers build config 00:01:58.176 common/iavf: not in enabled drivers build config 00:01:58.176 common/idpf: not in enabled drivers build config 00:01:58.176 common/ionic: not in enabled drivers build config 00:01:58.176 common/mvep: not in enabled drivers build config 00:01:58.176 common/octeontx: not in enabled drivers build config 00:01:58.176 bus/auxiliary: not in enabled drivers build config 00:01:58.176 bus/cdx: not in enabled drivers build config 00:01:58.176 bus/dpaa: not in enabled drivers build config 00:01:58.176 bus/fslmc: not in enabled drivers build config 00:01:58.176 bus/ifpga: not in enabled drivers build config 00:01:58.176 bus/platform: not in enabled drivers build config 00:01:58.176 bus/uacce: not in enabled drivers build config 00:01:58.176 bus/vmbus: not in enabled drivers build config 00:01:58.176 common/cnxk: not in enabled drivers build config 00:01:58.176 common/mlx5: not in enabled drivers build config 
00:01:58.176 common/nfp: not in enabled drivers build config 00:01:58.176 common/nitrox: not in enabled drivers build config 00:01:58.176 common/qat: not in enabled drivers build config 00:01:58.176 common/sfc_efx: not in enabled drivers build config 00:01:58.176 mempool/bucket: not in enabled drivers build config 00:01:58.176 mempool/cnxk: not in enabled drivers build config 00:01:58.177 mempool/dpaa: not in enabled drivers build config 00:01:58.177 mempool/dpaa2: not in enabled drivers build config 00:01:58.177 mempool/octeontx: not in enabled drivers build config 00:01:58.177 mempool/stack: not in enabled drivers build config 00:01:58.177 dma/cnxk: not in enabled drivers build config 00:01:58.177 dma/dpaa: not in enabled drivers build config 00:01:58.177 dma/dpaa2: not in enabled drivers build config 00:01:58.177 dma/hisilicon: not in enabled drivers build config 00:01:58.177 dma/idxd: not in enabled drivers build config 00:01:58.177 dma/ioat: not in enabled drivers build config 00:01:58.177 dma/skeleton: not in enabled drivers build config 00:01:58.177 net/af_packet: not in enabled drivers build config 00:01:58.177 net/af_xdp: not in enabled drivers build config 00:01:58.177 net/ark: not in enabled drivers build config 00:01:58.177 net/atlantic: not in enabled drivers build config 00:01:58.177 net/avp: not in enabled drivers build config 00:01:58.177 net/axgbe: not in enabled drivers build config 00:01:58.177 net/bnx2x: not in enabled drivers build config 00:01:58.177 net/bnxt: not in enabled drivers build config 00:01:58.177 net/bonding: not in enabled drivers build config 00:01:58.177 net/cnxk: not in enabled drivers build config 00:01:58.177 net/cpfl: not in enabled drivers build config 00:01:58.177 net/cxgbe: not in enabled drivers build config 00:01:58.177 net/dpaa: not in enabled drivers build config 00:01:58.177 net/dpaa2: not in enabled drivers build config 00:01:58.177 net/e1000: not in enabled drivers build config 00:01:58.177 net/ena: not in enabled drivers build config 00:01:58.177 net/enetc: not in enabled drivers build config 00:01:58.177 net/enetfec: not in enabled drivers build config 00:01:58.177 net/enic: not in enabled drivers build config 00:01:58.177 net/failsafe: not in enabled drivers build config 00:01:58.177 net/fm10k: not in enabled drivers build config 00:01:58.177 net/gve: not in enabled drivers build config 00:01:58.177 net/hinic: not in enabled drivers build config 00:01:58.177 net/hns3: not in enabled drivers build config 00:01:58.177 net/i40e: not in enabled drivers build config 00:01:58.177 net/iavf: not in enabled drivers build config 00:01:58.177 net/ice: not in enabled drivers build config 00:01:58.177 net/idpf: not in enabled drivers build config 00:01:58.177 net/igc: not in enabled drivers build config 00:01:58.177 net/ionic: not in enabled drivers build config 00:01:58.177 net/ipn3ke: not in enabled drivers build config 00:01:58.177 net/ixgbe: not in enabled drivers build config 00:01:58.177 net/mana: not in enabled drivers build config 00:01:58.177 net/memif: not in enabled drivers build config 00:01:58.177 net/mlx4: not in enabled drivers build config 00:01:58.177 net/mlx5: not in enabled drivers build config 00:01:58.177 net/mvneta: not in enabled drivers build config 00:01:58.177 net/mvpp2: not in enabled drivers build config 00:01:58.177 net/netvsc: not in enabled drivers build config 00:01:58.177 net/nfb: not in enabled drivers build config 00:01:58.177 net/nfp: not in enabled drivers build config 00:01:58.177 net/ngbe: not in enabled 
drivers build config 00:01:58.177 net/null: not in enabled drivers build config 00:01:58.177 net/octeontx: not in enabled drivers build config 00:01:58.177 net/octeon_ep: not in enabled drivers build config 00:01:58.177 net/pcap: not in enabled drivers build config 00:01:58.177 net/pfe: not in enabled drivers build config 00:01:58.177 net/qede: not in enabled drivers build config 00:01:58.177 net/ring: not in enabled drivers build config 00:01:58.177 net/sfc: not in enabled drivers build config 00:01:58.177 net/softnic: not in enabled drivers build config 00:01:58.177 net/tap: not in enabled drivers build config 00:01:58.177 net/thunderx: not in enabled drivers build config 00:01:58.177 net/txgbe: not in enabled drivers build config 00:01:58.177 net/vdev_netvsc: not in enabled drivers build config 00:01:58.177 net/vhost: not in enabled drivers build config 00:01:58.177 net/virtio: not in enabled drivers build config 00:01:58.177 net/vmxnet3: not in enabled drivers build config 00:01:58.177 raw/*: missing internal dependency, "rawdev" 00:01:58.177 crypto/armv8: not in enabled drivers build config 00:01:58.177 crypto/bcmfs: not in enabled drivers build config 00:01:58.177 crypto/caam_jr: not in enabled drivers build config 00:01:58.177 crypto/ccp: not in enabled drivers build config 00:01:58.177 crypto/cnxk: not in enabled drivers build config 00:01:58.177 crypto/dpaa_sec: not in enabled drivers build config 00:01:58.177 crypto/dpaa2_sec: not in enabled drivers build config 00:01:58.177 crypto/ipsec_mb: not in enabled drivers build config 00:01:58.177 crypto/mlx5: not in enabled drivers build config 00:01:58.177 crypto/mvsam: not in enabled drivers build config 00:01:58.177 crypto/nitrox: not in enabled drivers build config 00:01:58.177 crypto/null: not in enabled drivers build config 00:01:58.177 crypto/octeontx: not in enabled drivers build config 00:01:58.177 crypto/openssl: not in enabled drivers build config 00:01:58.177 crypto/scheduler: not in enabled drivers build config 00:01:58.177 crypto/uadk: not in enabled drivers build config 00:01:58.177 crypto/virtio: not in enabled drivers build config 00:01:58.177 compress/isal: not in enabled drivers build config 00:01:58.177 compress/mlx5: not in enabled drivers build config 00:01:58.177 compress/nitrox: not in enabled drivers build config 00:01:58.177 compress/octeontx: not in enabled drivers build config 00:01:58.177 compress/zlib: not in enabled drivers build config 00:01:58.177 regex/*: missing internal dependency, "regexdev" 00:01:58.177 ml/*: missing internal dependency, "mldev" 00:01:58.177 vdpa/ifc: not in enabled drivers build config 00:01:58.177 vdpa/mlx5: not in enabled drivers build config 00:01:58.177 vdpa/nfp: not in enabled drivers build config 00:01:58.177 vdpa/sfc: not in enabled drivers build config 00:01:58.177 event/*: missing internal dependency, "eventdev" 00:01:58.177 baseband/*: missing internal dependency, "bbdev" 00:01:58.177 gpu/*: missing internal dependency, "gpudev" 00:01:58.177 00:01:58.177 00:01:58.177 Build targets in project: 85 00:01:58.177 00:01:58.177 DPDK 24.03.0 00:01:58.177 00:01:58.177 User defined options 00:01:58.177 buildtype : debug 00:01:58.177 default_library : shared 00:01:58.177 libdir : lib 00:01:58.177 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:58.177 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:58.177 c_link_args : 00:01:58.177 cpu_instruction_set: native 00:01:58.177 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:58.177 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:58.177 enable_docs : false 00:01:58.177 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:58.177 enable_kmods : false 00:01:58.177 max_lcores : 128 00:01:58.177 tests : false 00:01:58.177 00:01:58.177 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:58.177 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:58.443 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:58.443 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:58.443 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:58.443 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:58.443 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:58.443 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:58.443 [7/268] Linking static target lib/librte_kvargs.a 00:01:58.443 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:58.443 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:58.443 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:58.443 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:58.443 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:58.443 [13/268] Linking static target lib/librte_log.a 00:01:58.443 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:58.443 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:58.443 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:58.443 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:58.443 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:58.702 [19/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:58.702 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:58.702 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:58.702 [22/268] Linking static target lib/librte_pci.a 00:01:58.702 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:58.702 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:58.702 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:58.961 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:58.961 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:58.961 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:58.961 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:58.961 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:58.961 [31/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:58.961 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:58.961 [33/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:58.961 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:58.961 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:58.961 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:58.961 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:58.961 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:58.961 [39/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:58.961 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:58.961 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:58.961 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:58.961 [43/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.961 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:58.961 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:58.961 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:58.962 [47/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.962 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:58.962 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:58.962 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:58.962 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:58.962 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:58.962 [53/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:58.962 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:58.962 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:58.962 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:58.962 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:58.962 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:58.962 [59/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:58.962 [60/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:58.962 [61/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:58.962 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:58.962 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:58.962 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:58.962 [65/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:58.962 [66/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.962 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:58.962 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:58.962 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:58.962 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.962 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:58.962 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:58.962 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:58.962 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:58.962 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:58.962 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.962 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:58.962 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.962 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:58.962 [80/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:58.962 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:58.962 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:58.962 [83/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.962 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:58.962 [85/268] Linking static target lib/librte_meter.a 00:01:58.962 [86/268] Linking static target lib/librte_ring.a 00:01:58.962 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:58.962 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:58.962 [89/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.962 [90/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:58.962 [91/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:58.962 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:58.962 [93/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:58.962 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:58.962 [95/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.962 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.962 [97/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:58.962 [98/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:58.962 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:58.962 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.962 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:58.962 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.962 [103/268] Linking static target lib/librte_telemetry.a 00:01:58.962 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:59.220 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:59.220 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:59.220 [107/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:59.220 [108/268] Linking static target lib/librte_mempool.a 00:01:59.220 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:59.220 [110/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:59.220 [111/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:59.220 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:59.220 [113/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:59.220 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:59.220 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:59.220 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:59.220 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:59.220 [118/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.220 [119/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:59.220 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:59.220 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:59.220 [122/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:59.220 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:59.220 [124/268] Linking static target lib/librte_net.a 00:01:59.220 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:59.220 [126/268] Linking static target lib/librte_eal.a 00:01:59.220 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:59.220 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:59.220 [129/268] Linking static target lib/librte_cmdline.a 00:01:59.220 [130/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:59.220 [131/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:59.220 [132/268] Linking static target lib/librte_mbuf.a 00:01:59.220 [133/268] Linking static target lib/librte_rcu.a 00:01:59.220 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:59.220 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:59.220 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:59.220 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:59.220 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.220 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:59.220 [140/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.220 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:59.220 [142/268] Linking target lib/librte_log.so.24.1 00:01:59.220 [143/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.220 [144/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:59.479 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:59.479 [146/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:59.479 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:59.479 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:59.479 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:59.479 [150/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:59.479 [151/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 
00:01:59.479 [152/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:59.479 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:59.479 [154/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:59.479 [155/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:59.479 [156/268] Linking static target lib/librte_dmadev.a 00:01:59.479 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:59.479 [158/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:59.479 [159/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.479 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:59.479 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:59.479 [162/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:59.479 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:59.479 [164/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:59.479 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:59.479 [166/268] Linking static target lib/librte_timer.a 00:01:59.479 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:59.479 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:59.479 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:59.479 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:59.479 [171/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.479 [172/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:59.479 [173/268] Linking target lib/librte_kvargs.so.24.1 00:01:59.479 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:59.479 [175/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:59.479 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:59.479 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:59.479 [178/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:59.479 [179/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:59.479 [180/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:59.479 [181/268] Linking target lib/librte_telemetry.so.24.1 00:01:59.479 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:59.479 [183/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:59.479 [184/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:59.479 [185/268] Linking static target lib/librte_security.a 00:01:59.479 [186/268] Linking static target lib/librte_power.a 00:01:59.479 [187/268] Linking static target lib/librte_compressdev.a 00:01:59.479 [188/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:59.479 [189/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.479 [190/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:59.479 [191/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:59.740 [192/268] Linking static target lib/librte_hash.a 
00:01:59.740 [193/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.740 [194/268] Linking static target drivers/librte_bus_vdev.a 00:01:59.740 [195/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:59.740 [196/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:59.740 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:59.740 [198/268] Linking static target lib/librte_reorder.a 00:01:59.740 [199/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:59.740 [200/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:59.740 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:59.740 [202/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:59.740 [203/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.740 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.740 [205/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.740 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:59.740 [207/268] Linking static target drivers/librte_mempool_ring.a 00:01:59.740 [208/268] Linking static target drivers/librte_bus_pci.a 00:01:59.740 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.740 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:59.740 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.999 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.999 [213/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.999 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.999 [215/268] Linking static target lib/librte_cryptodev.a 00:01:59.999 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.999 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.999 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.999 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.258 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:00.258 [221/268] Linking static target lib/librte_ethdev.a 00:02:00.258 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.258 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.258 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:00.516 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.516 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.516 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.453 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:01.453 
[229/268] Linking static target lib/librte_vhost.a 00:02:01.712 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.089 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.359 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.618 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.876 [234/268] Linking target lib/librte_eal.so.24.1 00:02:08.876 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:08.876 [236/268] Linking target lib/librte_ring.so.24.1 00:02:08.876 [237/268] Linking target lib/librte_timer.so.24.1 00:02:08.876 [238/268] Linking target lib/librte_meter.so.24.1 00:02:08.876 [239/268] Linking target lib/librte_pci.so.24.1 00:02:08.876 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:08.876 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:09.136 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:09.136 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:09.136 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:09.136 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:09.136 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:09.136 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:09.136 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:09.136 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:09.136 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:09.136 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:09.136 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:09.136 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:09.394 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:09.394 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:09.394 [256/268] Linking target lib/librte_net.so.24.1 00:02:09.394 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:09.394 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:09.653 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:09.653 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:09.653 [261/268] Linking target lib/librte_hash.so.24.1 00:02:09.653 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:09.653 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:09.653 [264/268] Linking target lib/librte_security.so.24.1 00:02:09.653 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:09.653 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:09.912 [267/268] Linking target lib/librte_power.so.24.1 00:02:09.912 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:09.912 INFO: autodetecting backend as ninja 00:02:09.912 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:22.237 CC lib/ut/ut.o 00:02:22.237 CC lib/log/log.o 00:02:22.237 CC lib/log/log_flags.o 00:02:22.237 CC 
lib/log/log_deprecated.o 00:02:22.237 CC lib/ut_mock/mock.o 00:02:22.237 LIB libspdk_ut.a 00:02:22.237 SO libspdk_ut.so.2.0 00:02:22.237 LIB libspdk_log.a 00:02:22.237 LIB libspdk_ut_mock.a 00:02:22.237 SYMLINK libspdk_ut.so 00:02:22.237 SO libspdk_log.so.7.1 00:02:22.237 SO libspdk_ut_mock.so.6.0 00:02:22.237 SYMLINK libspdk_log.so 00:02:22.237 SYMLINK libspdk_ut_mock.so 00:02:22.237 CC lib/dma/dma.o 00:02:22.237 CXX lib/trace_parser/trace.o 00:02:22.237 CC lib/ioat/ioat.o 00:02:22.237 CC lib/util/base64.o 00:02:22.237 CC lib/util/bit_array.o 00:02:22.237 CC lib/util/cpuset.o 00:02:22.237 CC lib/util/crc16.o 00:02:22.237 CC lib/util/crc32.o 00:02:22.237 CC lib/util/crc32c.o 00:02:22.237 CC lib/util/crc32_ieee.o 00:02:22.237 CC lib/util/dif.o 00:02:22.237 CC lib/util/crc64.o 00:02:22.237 CC lib/util/fd.o 00:02:22.237 CC lib/util/fd_group.o 00:02:22.237 CC lib/util/file.o 00:02:22.237 CC lib/util/hexlify.o 00:02:22.237 CC lib/util/iov.o 00:02:22.237 CC lib/util/math.o 00:02:22.237 CC lib/util/net.o 00:02:22.237 CC lib/util/pipe.o 00:02:22.237 CC lib/util/strerror_tls.o 00:02:22.237 CC lib/util/string.o 00:02:22.237 CC lib/util/uuid.o 00:02:22.237 CC lib/util/xor.o 00:02:22.237 CC lib/util/zipf.o 00:02:22.237 CC lib/util/md5.o 00:02:22.237 LIB libspdk_dma.a 00:02:22.237 CC lib/vfio_user/host/vfio_user_pci.o 00:02:22.237 CC lib/vfio_user/host/vfio_user.o 00:02:22.237 SO libspdk_dma.so.5.0 00:02:22.237 SYMLINK libspdk_dma.so 00:02:22.237 LIB libspdk_ioat.a 00:02:22.237 SO libspdk_ioat.so.7.0 00:02:22.237 SYMLINK libspdk_ioat.so 00:02:22.237 LIB libspdk_vfio_user.a 00:02:22.237 SO libspdk_vfio_user.so.5.0 00:02:22.237 LIB libspdk_util.a 00:02:22.237 SYMLINK libspdk_vfio_user.so 00:02:22.237 SO libspdk_util.so.10.1 00:02:22.237 SYMLINK libspdk_util.so 00:02:22.496 LIB libspdk_trace_parser.a 00:02:22.496 SO libspdk_trace_parser.so.6.0 00:02:22.496 SYMLINK libspdk_trace_parser.so 00:02:22.756 CC lib/vmd/vmd.o 00:02:22.756 CC lib/vmd/led.o 00:02:22.756 CC lib/json/json_parse.o 00:02:22.756 CC lib/json/json_write.o 00:02:22.756 CC lib/json/json_util.o 00:02:22.756 CC lib/env_dpdk/env.o 00:02:22.756 CC lib/env_dpdk/memory.o 00:02:22.756 CC lib/rdma_utils/rdma_utils.o 00:02:22.756 CC lib/env_dpdk/pci.o 00:02:22.756 CC lib/idxd/idxd_user.o 00:02:22.756 CC lib/env_dpdk/init.o 00:02:22.756 CC lib/idxd/idxd.o 00:02:22.756 CC lib/env_dpdk/threads.o 00:02:22.756 CC lib/idxd/idxd_kernel.o 00:02:22.756 CC lib/env_dpdk/pci_ioat.o 00:02:22.756 CC lib/env_dpdk/pci_virtio.o 00:02:22.756 CC lib/env_dpdk/pci_idxd.o 00:02:22.756 CC lib/env_dpdk/pci_vmd.o 00:02:22.756 CC lib/env_dpdk/pci_event.o 00:02:22.756 CC lib/env_dpdk/sigbus_handler.o 00:02:22.756 CC lib/conf/conf.o 00:02:22.756 CC lib/env_dpdk/pci_dpdk.o 00:02:22.756 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:22.756 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:23.015 LIB libspdk_conf.a 00:02:23.015 SO libspdk_conf.so.6.0 00:02:23.015 LIB libspdk_rdma_utils.a 00:02:23.015 LIB libspdk_json.a 00:02:23.015 SO libspdk_rdma_utils.so.1.0 00:02:23.015 SYMLINK libspdk_conf.so 00:02:23.015 SO libspdk_json.so.6.0 00:02:23.015 SYMLINK libspdk_rdma_utils.so 00:02:23.015 SYMLINK libspdk_json.so 00:02:23.274 LIB libspdk_idxd.a 00:02:23.274 LIB libspdk_vmd.a 00:02:23.274 SO libspdk_idxd.so.12.1 00:02:23.274 SO libspdk_vmd.so.6.0 00:02:23.274 SYMLINK libspdk_vmd.so 00:02:23.274 SYMLINK libspdk_idxd.so 00:02:23.274 CC lib/rdma_provider/common.o 00:02:23.274 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:23.274 CC lib/jsonrpc/jsonrpc_server.o 00:02:23.274 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:02:23.274 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:23.274 CC lib/jsonrpc/jsonrpc_client.o 00:02:23.532 LIB libspdk_rdma_provider.a 00:02:23.532 SO libspdk_rdma_provider.so.7.0 00:02:23.532 LIB libspdk_jsonrpc.a 00:02:23.532 SYMLINK libspdk_rdma_provider.so 00:02:23.532 SO libspdk_jsonrpc.so.6.0 00:02:23.532 SYMLINK libspdk_jsonrpc.so 00:02:23.791 LIB libspdk_env_dpdk.a 00:02:23.791 SO libspdk_env_dpdk.so.15.1 00:02:23.791 SYMLINK libspdk_env_dpdk.so 00:02:23.791 CC lib/rpc/rpc.o 00:02:24.051 LIB libspdk_rpc.a 00:02:24.051 SO libspdk_rpc.so.6.0 00:02:24.051 SYMLINK libspdk_rpc.so 00:02:24.311 CC lib/trace/trace.o 00:02:24.311 CC lib/trace/trace_flags.o 00:02:24.311 CC lib/trace/trace_rpc.o 00:02:24.311 CC lib/notify/notify.o 00:02:24.311 CC lib/notify/notify_rpc.o 00:02:24.570 CC lib/keyring/keyring.o 00:02:24.570 CC lib/keyring/keyring_rpc.o 00:02:24.570 LIB libspdk_notify.a 00:02:24.570 SO libspdk_notify.so.6.0 00:02:24.570 LIB libspdk_trace.a 00:02:24.570 SO libspdk_trace.so.11.0 00:02:24.570 SYMLINK libspdk_notify.so 00:02:24.571 LIB libspdk_keyring.a 00:02:24.829 SO libspdk_keyring.so.2.0 00:02:24.829 SYMLINK libspdk_trace.so 00:02:24.829 SYMLINK libspdk_keyring.so 00:02:25.089 CC lib/thread/thread.o 00:02:25.089 CC lib/thread/iobuf.o 00:02:25.089 CC lib/sock/sock.o 00:02:25.089 CC lib/sock/sock_rpc.o 00:02:25.348 LIB libspdk_sock.a 00:02:25.348 SO libspdk_sock.so.10.0 00:02:25.348 SYMLINK libspdk_sock.so 00:02:25.608 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:25.608 CC lib/nvme/nvme_ctrlr.o 00:02:25.608 CC lib/nvme/nvme_ns_cmd.o 00:02:25.608 CC lib/nvme/nvme_fabric.o 00:02:25.608 CC lib/nvme/nvme_ns.o 00:02:25.608 CC lib/nvme/nvme_pcie_common.o 00:02:25.608 CC lib/nvme/nvme_pcie.o 00:02:25.608 CC lib/nvme/nvme_qpair.o 00:02:25.608 CC lib/nvme/nvme.o 00:02:25.608 CC lib/nvme/nvme_discovery.o 00:02:25.608 CC lib/nvme/nvme_quirks.o 00:02:25.608 CC lib/nvme/nvme_transport.o 00:02:25.608 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:25.608 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:25.608 CC lib/nvme/nvme_tcp.o 00:02:25.608 CC lib/nvme/nvme_opal.o 00:02:25.608 CC lib/nvme/nvme_io_msg.o 00:02:25.608 CC lib/nvme/nvme_poll_group.o 00:02:25.608 CC lib/nvme/nvme_zns.o 00:02:25.608 CC lib/nvme/nvme_stubs.o 00:02:25.608 CC lib/nvme/nvme_auth.o 00:02:25.608 CC lib/nvme/nvme_cuse.o 00:02:25.608 CC lib/nvme/nvme_rdma.o 00:02:25.608 CC lib/nvme/nvme_vfio_user.o 00:02:26.175 LIB libspdk_thread.a 00:02:26.175 SO libspdk_thread.so.11.0 00:02:26.175 SYMLINK libspdk_thread.so 00:02:26.434 CC lib/virtio/virtio.o 00:02:26.434 CC lib/virtio/virtio_vfio_user.o 00:02:26.434 CC lib/virtio/virtio_vhost_user.o 00:02:26.434 CC lib/init/json_config.o 00:02:26.434 CC lib/virtio/virtio_pci.o 00:02:26.434 CC lib/init/subsystem.o 00:02:26.434 CC lib/init/subsystem_rpc.o 00:02:26.434 CC lib/init/rpc.o 00:02:26.434 CC lib/blob/zeroes.o 00:02:26.434 CC lib/blob/blobstore.o 00:02:26.434 CC lib/blob/request.o 00:02:26.434 CC lib/blob/blob_bs_dev.o 00:02:26.434 CC lib/fsdev/fsdev.o 00:02:26.434 CC lib/fsdev/fsdev_io.o 00:02:26.434 CC lib/fsdev/fsdev_rpc.o 00:02:26.434 CC lib/vfu_tgt/tgt_rpc.o 00:02:26.434 CC lib/vfu_tgt/tgt_endpoint.o 00:02:26.434 CC lib/accel/accel.o 00:02:26.434 CC lib/accel/accel_rpc.o 00:02:26.434 CC lib/accel/accel_sw.o 00:02:26.693 LIB libspdk_init.a 00:02:26.693 SO libspdk_init.so.6.0 00:02:26.693 LIB libspdk_virtio.a 00:02:26.693 SYMLINK libspdk_init.so 00:02:26.693 LIB libspdk_vfu_tgt.a 00:02:26.693 SO libspdk_virtio.so.7.0 00:02:26.693 SO libspdk_vfu_tgt.so.3.0 00:02:26.952 
SYMLINK libspdk_virtio.so 00:02:26.952 SYMLINK libspdk_vfu_tgt.so 00:02:26.952 LIB libspdk_fsdev.a 00:02:26.952 SO libspdk_fsdev.so.2.0 00:02:26.952 CC lib/event/app_rpc.o 00:02:26.952 CC lib/event/app.o 00:02:26.952 CC lib/event/reactor.o 00:02:26.952 CC lib/event/log_rpc.o 00:02:26.952 CC lib/event/scheduler_static.o 00:02:27.211 SYMLINK libspdk_fsdev.so 00:02:27.211 LIB libspdk_accel.a 00:02:27.471 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:27.471 SO libspdk_accel.so.16.0 00:02:27.471 LIB libspdk_nvme.a 00:02:27.471 LIB libspdk_event.a 00:02:27.471 SO libspdk_event.so.14.0 00:02:27.471 SYMLINK libspdk_accel.so 00:02:27.471 SO libspdk_nvme.so.15.0 00:02:27.471 SYMLINK libspdk_event.so 00:02:27.730 SYMLINK libspdk_nvme.so 00:02:27.730 CC lib/bdev/bdev_rpc.o 00:02:27.730 CC lib/bdev/bdev.o 00:02:27.730 CC lib/bdev/bdev_zone.o 00:02:27.730 CC lib/bdev/part.o 00:02:27.730 CC lib/bdev/scsi_nvme.o 00:02:27.730 LIB libspdk_fuse_dispatcher.a 00:02:27.989 SO libspdk_fuse_dispatcher.so.1.0 00:02:27.989 SYMLINK libspdk_fuse_dispatcher.so 00:02:28.557 LIB libspdk_blob.a 00:02:28.557 SO libspdk_blob.so.11.0 00:02:28.816 SYMLINK libspdk_blob.so 00:02:29.074 CC lib/blobfs/blobfs.o 00:02:29.074 CC lib/blobfs/tree.o 00:02:29.074 CC lib/lvol/lvol.o 00:02:29.643 LIB libspdk_bdev.a 00:02:29.643 LIB libspdk_blobfs.a 00:02:29.643 SO libspdk_bdev.so.17.0 00:02:29.643 SO libspdk_blobfs.so.10.0 00:02:29.643 SYMLINK libspdk_bdev.so 00:02:29.643 SYMLINK libspdk_blobfs.so 00:02:29.643 LIB libspdk_lvol.a 00:02:29.643 SO libspdk_lvol.so.10.0 00:02:29.904 SYMLINK libspdk_lvol.so 00:02:29.904 CC lib/ftl/ftl_layout.o 00:02:29.904 CC lib/ftl/ftl_core.o 00:02:29.904 CC lib/ftl/ftl_init.o 00:02:29.904 CC lib/ftl/ftl_debug.o 00:02:29.904 CC lib/ftl/ftl_sb.o 00:02:29.904 CC lib/ftl/ftl_io.o 00:02:29.904 CC lib/ftl/ftl_l2p.o 00:02:29.904 CC lib/ftl/ftl_nv_cache.o 00:02:29.904 CC lib/ftl/ftl_band_ops.o 00:02:29.904 CC lib/ftl/ftl_l2p_flat.o 00:02:29.904 CC lib/ftl/ftl_band.o 00:02:29.904 CC lib/ftl/ftl_rq.o 00:02:29.904 CC lib/ftl/ftl_writer.o 00:02:29.904 CC lib/nvmf/ctrlr.o 00:02:29.904 CC lib/ftl/ftl_p2l.o 00:02:29.904 CC lib/ftl/ftl_reloc.o 00:02:29.904 CC lib/ftl/ftl_l2p_cache.o 00:02:29.904 CC lib/nvmf/ctrlr_discovery.o 00:02:29.904 CC lib/nvmf/ctrlr_bdev.o 00:02:29.904 CC lib/nvmf/nvmf.o 00:02:29.904 CC lib/nvmf/subsystem.o 00:02:29.904 CC lib/ftl/ftl_p2l_log.o 00:02:29.904 CC lib/nvmf/nvmf_rpc.o 00:02:29.904 CC lib/nvmf/stubs.o 00:02:29.904 CC lib/ftl/mngt/ftl_mngt.o 00:02:29.904 CC lib/nvmf/transport.o 00:02:29.904 CC lib/nvmf/tcp.o 00:02:29.904 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:29.904 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:29.904 CC lib/nvmf/mdns_server.o 00:02:29.904 CC lib/nvmf/vfio_user.o 00:02:29.904 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:29.904 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:29.904 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:29.904 CC lib/nvmf/rdma.o 00:02:29.904 CC lib/nvmf/auth.o 00:02:29.904 CC lib/scsi/dev.o 00:02:29.904 CC lib/scsi/lun.o 00:02:29.904 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:29.904 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:29.904 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:29.904 CC lib/scsi/scsi.o 00:02:29.904 CC lib/scsi/port.o 00:02:29.904 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:29.904 CC lib/scsi/scsi_bdev.o 00:02:29.904 CC lib/scsi/scsi_pr.o 00:02:29.904 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:29.904 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:29.904 CC lib/ublk/ublk.o 00:02:29.904 CC lib/ftl/utils/ftl_conf.o 00:02:29.904 CC lib/ftl/utils/ftl_md.o 00:02:29.904 CC lib/scsi/scsi_rpc.o 
00:02:29.904 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:29.904 CC lib/scsi/task.o 00:02:29.904 CC lib/ftl/utils/ftl_bitmap.o 00:02:29.904 CC lib/ublk/ublk_rpc.o 00:02:29.904 CC lib/ftl/utils/ftl_mempool.o 00:02:29.904 CC lib/ftl/utils/ftl_property.o 00:02:29.904 CC lib/nbd/nbd_rpc.o 00:02:29.904 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:29.904 CC lib/nbd/nbd.o 00:02:29.904 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:29.904 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:29.904 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:29.904 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:29.904 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:29.904 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:29.904 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:29.904 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:29.904 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:29.904 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:29.904 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:29.904 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:29.904 CC lib/ftl/base/ftl_base_dev.o 00:02:29.904 CC lib/ftl/base/ftl_base_bdev.o 00:02:30.163 CC lib/ftl/ftl_trace.o 00:02:30.731 LIB libspdk_scsi.a 00:02:30.731 SO libspdk_scsi.so.9.0 00:02:30.731 LIB libspdk_nbd.a 00:02:30.731 SO libspdk_nbd.so.7.0 00:02:30.731 SYMLINK libspdk_scsi.so 00:02:30.731 SYMLINK libspdk_nbd.so 00:02:30.731 LIB libspdk_ublk.a 00:02:30.731 SO libspdk_ublk.so.3.0 00:02:30.990 SYMLINK libspdk_ublk.so 00:02:30.990 LIB libspdk_ftl.a 00:02:30.990 CC lib/vhost/vhost.o 00:02:30.990 CC lib/vhost/vhost_rpc.o 00:02:30.990 CC lib/vhost/vhost_blk.o 00:02:30.990 CC lib/iscsi/conn.o 00:02:30.990 CC lib/vhost/vhost_scsi.o 00:02:30.990 CC lib/iscsi/init_grp.o 00:02:30.990 CC lib/iscsi/iscsi.o 00:02:30.990 CC lib/vhost/rte_vhost_user.o 00:02:30.990 CC lib/iscsi/tgt_node.o 00:02:30.990 CC lib/iscsi/param.o 00:02:30.990 CC lib/iscsi/portal_grp.o 00:02:30.990 CC lib/iscsi/iscsi_subsystem.o 00:02:30.990 CC lib/iscsi/iscsi_rpc.o 00:02:30.990 CC lib/iscsi/task.o 00:02:30.990 SO libspdk_ftl.so.9.0 00:02:31.253 SYMLINK libspdk_ftl.so 00:02:31.820 LIB libspdk_nvmf.a 00:02:31.820 SO libspdk_nvmf.so.20.0 00:02:31.820 LIB libspdk_vhost.a 00:02:31.820 SO libspdk_vhost.so.8.0 00:02:31.820 SYMLINK libspdk_nvmf.so 00:02:32.080 SYMLINK libspdk_vhost.so 00:02:32.080 LIB libspdk_iscsi.a 00:02:32.080 SO libspdk_iscsi.so.8.0 00:02:32.339 SYMLINK libspdk_iscsi.so 00:02:32.598 CC module/vfu_device/vfu_virtio.o 00:02:32.598 CC module/vfu_device/vfu_virtio_blk.o 00:02:32.598 CC module/vfu_device/vfu_virtio_scsi.o 00:02:32.598 CC module/vfu_device/vfu_virtio_rpc.o 00:02:32.598 CC module/vfu_device/vfu_virtio_fs.o 00:02:32.598 CC module/env_dpdk/env_dpdk_rpc.o 00:02:32.857 CC module/scheduler/gscheduler/gscheduler.o 00:02:32.857 CC module/sock/posix/posix.o 00:02:32.857 LIB libspdk_env_dpdk_rpc.a 00:02:32.857 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:32.857 CC module/keyring/linux/keyring_rpc.o 00:02:32.857 CC module/keyring/linux/keyring.o 00:02:32.857 CC module/fsdev/aio/fsdev_aio.o 00:02:32.857 CC module/fsdev/aio/linux_aio_mgr.o 00:02:32.857 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:32.857 CC module/blob/bdev/blob_bdev.o 00:02:32.857 CC module/accel/iaa/accel_iaa.o 00:02:32.857 CC module/accel/iaa/accel_iaa_rpc.o 00:02:32.857 CC module/accel/dsa/accel_dsa.o 00:02:32.857 CC module/accel/dsa/accel_dsa_rpc.o 00:02:32.857 CC module/keyring/file/keyring_rpc.o 00:02:32.857 CC module/keyring/file/keyring.o 00:02:32.857 CC module/accel/error/accel_error.o 00:02:32.857 SO libspdk_env_dpdk_rpc.so.6.0 00:02:32.857 CC module/accel/error/accel_error_rpc.o 
00:02:32.857 CC module/accel/ioat/accel_ioat_rpc.o 00:02:32.857 CC module/accel/ioat/accel_ioat.o 00:02:32.857 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:32.857 SYMLINK libspdk_env_dpdk_rpc.so 00:02:32.857 LIB libspdk_scheduler_gscheduler.a 00:02:32.857 SO libspdk_scheduler_gscheduler.so.4.0 00:02:33.116 LIB libspdk_keyring_linux.a 00:02:33.116 LIB libspdk_scheduler_dynamic.a 00:02:33.116 LIB libspdk_scheduler_dpdk_governor.a 00:02:33.116 LIB libspdk_keyring_file.a 00:02:33.116 SYMLINK libspdk_scheduler_gscheduler.so 00:02:33.116 SO libspdk_scheduler_dynamic.so.4.0 00:02:33.116 SO libspdk_keyring_linux.so.1.0 00:02:33.116 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:33.116 LIB libspdk_accel_error.a 00:02:33.116 LIB libspdk_accel_iaa.a 00:02:33.116 LIB libspdk_accel_ioat.a 00:02:33.116 SO libspdk_keyring_file.so.2.0 00:02:33.116 SO libspdk_accel_error.so.2.0 00:02:33.116 SYMLINK libspdk_keyring_linux.so 00:02:33.116 SO libspdk_accel_iaa.so.3.0 00:02:33.116 SO libspdk_accel_ioat.so.6.0 00:02:33.116 SYMLINK libspdk_scheduler_dynamic.so 00:02:33.116 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:33.116 LIB libspdk_blob_bdev.a 00:02:33.116 SYMLINK libspdk_keyring_file.so 00:02:33.116 LIB libspdk_accel_dsa.a 00:02:33.116 SO libspdk_blob_bdev.so.11.0 00:02:33.116 SYMLINK libspdk_accel_error.so 00:02:33.116 SO libspdk_accel_dsa.so.5.0 00:02:33.116 SYMLINK libspdk_accel_ioat.so 00:02:33.116 SYMLINK libspdk_accel_iaa.so 00:02:33.116 SYMLINK libspdk_blob_bdev.so 00:02:33.116 LIB libspdk_vfu_device.a 00:02:33.116 SYMLINK libspdk_accel_dsa.so 00:02:33.116 SO libspdk_vfu_device.so.3.0 00:02:33.376 SYMLINK libspdk_vfu_device.so 00:02:33.376 LIB libspdk_fsdev_aio.a 00:02:33.376 SO libspdk_fsdev_aio.so.1.0 00:02:33.376 LIB libspdk_sock_posix.a 00:02:33.376 SO libspdk_sock_posix.so.6.0 00:02:33.376 SYMLINK libspdk_fsdev_aio.so 00:02:33.635 SYMLINK libspdk_sock_posix.so 00:02:33.635 CC module/blobfs/bdev/blobfs_bdev.o 00:02:33.635 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:33.635 CC module/bdev/delay/vbdev_delay.o 00:02:33.635 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:33.635 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:33.635 CC module/bdev/malloc/bdev_malloc.o 00:02:33.635 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:33.635 CC module/bdev/lvol/vbdev_lvol.o 00:02:33.635 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:33.635 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:33.635 CC module/bdev/error/vbdev_error.o 00:02:33.635 CC module/bdev/error/vbdev_error_rpc.o 00:02:33.635 CC module/bdev/raid/bdev_raid_rpc.o 00:02:33.635 CC module/bdev/ftl/bdev_ftl.o 00:02:33.635 CC module/bdev/raid/bdev_raid.o 00:02:33.635 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:33.635 CC module/bdev/raid/bdev_raid_sb.o 00:02:33.635 CC module/bdev/gpt/gpt.o 00:02:33.635 CC module/bdev/raid/raid0.o 00:02:33.635 CC module/bdev/aio/bdev_aio.o 00:02:33.635 CC module/bdev/raid/concat.o 00:02:33.635 CC module/bdev/gpt/vbdev_gpt.o 00:02:33.635 CC module/bdev/raid/raid1.o 00:02:33.635 CC module/bdev/aio/bdev_aio_rpc.o 00:02:33.635 CC module/bdev/split/vbdev_split.o 00:02:33.635 CC module/bdev/split/vbdev_split_rpc.o 00:02:33.635 CC module/bdev/null/bdev_null.o 00:02:33.635 CC module/bdev/nvme/bdev_nvme.o 00:02:33.635 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:33.635 CC module/bdev/null/bdev_null_rpc.o 00:02:33.635 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:33.635 CC module/bdev/nvme/bdev_mdns_client.o 00:02:33.635 CC module/bdev/nvme/nvme_rpc.o 00:02:33.635 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:33.635 
CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:33.635 CC module/bdev/nvme/vbdev_opal.o 00:02:33.635 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:33.635 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:33.635 CC module/bdev/iscsi/bdev_iscsi.o 00:02:33.635 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:33.635 CC module/bdev/passthru/vbdev_passthru.o 00:02:33.635 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:33.893 LIB libspdk_blobfs_bdev.a 00:02:33.893 SO libspdk_blobfs_bdev.so.6.0 00:02:33.893 SYMLINK libspdk_blobfs_bdev.so 00:02:33.893 LIB libspdk_bdev_error.a 00:02:33.893 LIB libspdk_bdev_null.a 00:02:33.893 LIB libspdk_bdev_gpt.a 00:02:33.893 SO libspdk_bdev_null.so.6.0 00:02:33.893 LIB libspdk_bdev_passthru.a 00:02:33.893 SO libspdk_bdev_error.so.6.0 00:02:33.893 SO libspdk_bdev_gpt.so.6.0 00:02:33.893 LIB libspdk_bdev_delay.a 00:02:33.893 LIB libspdk_bdev_ftl.a 00:02:33.893 LIB libspdk_bdev_split.a 00:02:33.893 LIB libspdk_bdev_zone_block.a 00:02:33.893 LIB libspdk_bdev_malloc.a 00:02:33.893 SO libspdk_bdev_passthru.so.6.0 00:02:33.893 LIB libspdk_bdev_aio.a 00:02:33.893 SYMLINK libspdk_bdev_null.so 00:02:33.893 SO libspdk_bdev_delay.so.6.0 00:02:33.893 SO libspdk_bdev_zone_block.so.6.0 00:02:34.152 SO libspdk_bdev_ftl.so.6.0 00:02:34.152 SO libspdk_bdev_malloc.so.6.0 00:02:34.152 SO libspdk_bdev_split.so.6.0 00:02:34.152 SYMLINK libspdk_bdev_error.so 00:02:34.152 SYMLINK libspdk_bdev_gpt.so 00:02:34.152 SYMLINK libspdk_bdev_passthru.so 00:02:34.152 SO libspdk_bdev_aio.so.6.0 00:02:34.152 LIB libspdk_bdev_iscsi.a 00:02:34.152 SYMLINK libspdk_bdev_delay.so 00:02:34.152 SYMLINK libspdk_bdev_zone_block.so 00:02:34.152 SYMLINK libspdk_bdev_ftl.so 00:02:34.153 LIB libspdk_bdev_lvol.a 00:02:34.153 SYMLINK libspdk_bdev_split.so 00:02:34.153 SYMLINK libspdk_bdev_malloc.so 00:02:34.153 SYMLINK libspdk_bdev_aio.so 00:02:34.153 SO libspdk_bdev_iscsi.so.6.0 00:02:34.153 SO libspdk_bdev_lvol.so.6.0 00:02:34.153 LIB libspdk_bdev_virtio.a 00:02:34.153 SYMLINK libspdk_bdev_lvol.so 00:02:34.153 SYMLINK libspdk_bdev_iscsi.so 00:02:34.153 SO libspdk_bdev_virtio.so.6.0 00:02:34.153 SYMLINK libspdk_bdev_virtio.so 00:02:34.412 LIB libspdk_bdev_raid.a 00:02:34.412 SO libspdk_bdev_raid.so.6.0 00:02:34.670 SYMLINK libspdk_bdev_raid.so 00:02:35.607 LIB libspdk_bdev_nvme.a 00:02:35.607 SO libspdk_bdev_nvme.so.7.1 00:02:35.607 SYMLINK libspdk_bdev_nvme.so 00:02:36.175 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:36.175 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:36.175 CC module/event/subsystems/vmd/vmd.o 00:02:36.175 CC module/event/subsystems/sock/sock.o 00:02:36.175 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:36.175 CC module/event/subsystems/fsdev/fsdev.o 00:02:36.175 CC module/event/subsystems/iobuf/iobuf.o 00:02:36.175 CC module/event/subsystems/scheduler/scheduler.o 00:02:36.175 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:36.175 CC module/event/subsystems/keyring/keyring.o 00:02:36.435 LIB libspdk_event_vfu_tgt.a 00:02:36.435 LIB libspdk_event_vhost_blk.a 00:02:36.435 LIB libspdk_event_sock.a 00:02:36.435 LIB libspdk_event_keyring.a 00:02:36.435 SO libspdk_event_vfu_tgt.so.3.0 00:02:36.435 LIB libspdk_event_vmd.a 00:02:36.435 LIB libspdk_event_fsdev.a 00:02:36.435 SO libspdk_event_vhost_blk.so.3.0 00:02:36.435 LIB libspdk_event_scheduler.a 00:02:36.435 LIB libspdk_event_iobuf.a 00:02:36.435 SO libspdk_event_keyring.so.1.0 00:02:36.435 SO libspdk_event_sock.so.5.0 00:02:36.435 SO libspdk_event_fsdev.so.1.0 00:02:36.435 SO libspdk_event_vmd.so.6.0 00:02:36.435 SO 
libspdk_event_scheduler.so.4.0 00:02:36.435 SYMLINK libspdk_event_vfu_tgt.so 00:02:36.435 SO libspdk_event_iobuf.so.3.0 00:02:36.435 SYMLINK libspdk_event_vhost_blk.so 00:02:36.435 SYMLINK libspdk_event_sock.so 00:02:36.435 SYMLINK libspdk_event_keyring.so 00:02:36.435 SYMLINK libspdk_event_fsdev.so 00:02:36.435 SYMLINK libspdk_event_vmd.so 00:02:36.435 SYMLINK libspdk_event_scheduler.so 00:02:36.435 SYMLINK libspdk_event_iobuf.so 00:02:36.694 CC module/event/subsystems/accel/accel.o 00:02:36.954 LIB libspdk_event_accel.a 00:02:36.954 SO libspdk_event_accel.so.6.0 00:02:36.954 SYMLINK libspdk_event_accel.so 00:02:37.214 CC module/event/subsystems/bdev/bdev.o 00:02:37.473 LIB libspdk_event_bdev.a 00:02:37.473 SO libspdk_event_bdev.so.6.0 00:02:37.473 SYMLINK libspdk_event_bdev.so 00:02:37.733 CC module/event/subsystems/ublk/ublk.o 00:02:37.733 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:37.733 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:37.733 CC module/event/subsystems/scsi/scsi.o 00:02:37.733 CC module/event/subsystems/nbd/nbd.o 00:02:37.991 LIB libspdk_event_ublk.a 00:02:37.991 SO libspdk_event_ublk.so.3.0 00:02:37.991 LIB libspdk_event_nbd.a 00:02:37.991 LIB libspdk_event_scsi.a 00:02:37.991 LIB libspdk_event_nvmf.a 00:02:37.991 SO libspdk_event_nbd.so.6.0 00:02:37.991 SO libspdk_event_scsi.so.6.0 00:02:37.991 SYMLINK libspdk_event_ublk.so 00:02:37.991 SO libspdk_event_nvmf.so.6.0 00:02:37.991 SYMLINK libspdk_event_nbd.so 00:02:37.991 SYMLINK libspdk_event_scsi.so 00:02:38.249 SYMLINK libspdk_event_nvmf.so 00:02:38.508 CC module/event/subsystems/iscsi/iscsi.o 00:02:38.508 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:38.508 LIB libspdk_event_vhost_scsi.a 00:02:38.508 LIB libspdk_event_iscsi.a 00:02:38.508 SO libspdk_event_vhost_scsi.so.3.0 00:02:38.508 SO libspdk_event_iscsi.so.6.0 00:02:38.508 SYMLINK libspdk_event_vhost_scsi.so 00:02:38.508 SYMLINK libspdk_event_iscsi.so 00:02:38.767 SO libspdk.so.6.0 00:02:38.767 SYMLINK libspdk.so 00:02:39.025 CXX app/trace/trace.o 00:02:39.026 CC app/spdk_nvme_identify/identify.o 00:02:39.026 CC app/spdk_top/spdk_top.o 00:02:39.026 CC test/rpc_client/rpc_client_test.o 00:02:39.026 CC app/spdk_lspci/spdk_lspci.o 00:02:39.026 CC app/spdk_nvme_discover/discovery_aer.o 00:02:39.026 CC app/trace_record/trace_record.o 00:02:39.026 TEST_HEADER include/spdk/accel.h 00:02:39.026 CC app/spdk_nvme_perf/perf.o 00:02:39.026 TEST_HEADER include/spdk/accel_module.h 00:02:39.026 TEST_HEADER include/spdk/base64.h 00:02:39.026 TEST_HEADER include/spdk/bdev_module.h 00:02:39.026 TEST_HEADER include/spdk/barrier.h 00:02:39.026 TEST_HEADER include/spdk/assert.h 00:02:39.026 TEST_HEADER include/spdk/bdev_zone.h 00:02:39.026 TEST_HEADER include/spdk/bdev.h 00:02:39.026 TEST_HEADER include/spdk/bit_pool.h 00:02:39.026 TEST_HEADER include/spdk/bit_array.h 00:02:39.026 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:39.026 TEST_HEADER include/spdk/blob_bdev.h 00:02:39.026 TEST_HEADER include/spdk/blob.h 00:02:39.026 TEST_HEADER include/spdk/conf.h 00:02:39.026 TEST_HEADER include/spdk/blobfs.h 00:02:39.026 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:39.026 CC app/iscsi_tgt/iscsi_tgt.o 00:02:39.296 TEST_HEADER include/spdk/crc64.h 00:02:39.296 TEST_HEADER include/spdk/config.h 00:02:39.296 TEST_HEADER include/spdk/crc16.h 00:02:39.296 TEST_HEADER include/spdk/cpuset.h 00:02:39.296 TEST_HEADER include/spdk/crc32.h 00:02:39.296 TEST_HEADER include/spdk/dma.h 00:02:39.296 CC app/spdk_dd/spdk_dd.o 00:02:39.296 TEST_HEADER include/spdk/endian.h 00:02:39.296 
TEST_HEADER include/spdk/dif.h 00:02:39.296 TEST_HEADER include/spdk/env_dpdk.h 00:02:39.296 TEST_HEADER include/spdk/env.h 00:02:39.296 TEST_HEADER include/spdk/fd_group.h 00:02:39.296 TEST_HEADER include/spdk/fd.h 00:02:39.296 TEST_HEADER include/spdk/file.h 00:02:39.296 TEST_HEADER include/spdk/event.h 00:02:39.296 TEST_HEADER include/spdk/ftl.h 00:02:39.296 TEST_HEADER include/spdk/fsdev_module.h 00:02:39.296 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:39.296 TEST_HEADER include/spdk/fsdev.h 00:02:39.296 TEST_HEADER include/spdk/hexlify.h 00:02:39.296 TEST_HEADER include/spdk/histogram_data.h 00:02:39.296 TEST_HEADER include/spdk/gpt_spec.h 00:02:39.296 TEST_HEADER include/spdk/idxd.h 00:02:39.296 TEST_HEADER include/spdk/init.h 00:02:39.296 TEST_HEADER include/spdk/idxd_spec.h 00:02:39.296 TEST_HEADER include/spdk/ioat_spec.h 00:02:39.296 TEST_HEADER include/spdk/ioat.h 00:02:39.296 TEST_HEADER include/spdk/iscsi_spec.h 00:02:39.296 TEST_HEADER include/spdk/jsonrpc.h 00:02:39.296 TEST_HEADER include/spdk/json.h 00:02:39.296 TEST_HEADER include/spdk/keyring.h 00:02:39.296 TEST_HEADER include/spdk/likely.h 00:02:39.296 TEST_HEADER include/spdk/log.h 00:02:39.296 TEST_HEADER include/spdk/keyring_module.h 00:02:39.296 TEST_HEADER include/spdk/lvol.h 00:02:39.296 TEST_HEADER include/spdk/md5.h 00:02:39.296 TEST_HEADER include/spdk/mmio.h 00:02:39.296 TEST_HEADER include/spdk/memory.h 00:02:39.296 TEST_HEADER include/spdk/nbd.h 00:02:39.296 CC app/nvmf_tgt/nvmf_main.o 00:02:39.296 TEST_HEADER include/spdk/net.h 00:02:39.296 TEST_HEADER include/spdk/nvme.h 00:02:39.296 TEST_HEADER include/spdk/notify.h 00:02:39.296 TEST_HEADER include/spdk/nvme_intel.h 00:02:39.296 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:39.296 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:39.296 TEST_HEADER include/spdk/nvme_zns.h 00:02:39.296 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:39.296 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:39.296 TEST_HEADER include/spdk/nvme_spec.h 00:02:39.296 TEST_HEADER include/spdk/nvmf.h 00:02:39.296 TEST_HEADER include/spdk/nvmf_spec.h 00:02:39.296 TEST_HEADER include/spdk/nvmf_transport.h 00:02:39.296 TEST_HEADER include/spdk/opal_spec.h 00:02:39.296 TEST_HEADER include/spdk/opal.h 00:02:39.296 TEST_HEADER include/spdk/pipe.h 00:02:39.296 TEST_HEADER include/spdk/pci_ids.h 00:02:39.296 TEST_HEADER include/spdk/reduce.h 00:02:39.296 TEST_HEADER include/spdk/queue.h 00:02:39.296 TEST_HEADER include/spdk/rpc.h 00:02:39.296 TEST_HEADER include/spdk/scsi.h 00:02:39.296 TEST_HEADER include/spdk/scheduler.h 00:02:39.296 TEST_HEADER include/spdk/scsi_spec.h 00:02:39.296 TEST_HEADER include/spdk/sock.h 00:02:39.296 TEST_HEADER include/spdk/stdinc.h 00:02:39.296 TEST_HEADER include/spdk/thread.h 00:02:39.296 TEST_HEADER include/spdk/trace.h 00:02:39.296 TEST_HEADER include/spdk/string.h 00:02:39.296 TEST_HEADER include/spdk/trace_parser.h 00:02:39.296 TEST_HEADER include/spdk/tree.h 00:02:39.296 TEST_HEADER include/spdk/ublk.h 00:02:39.296 TEST_HEADER include/spdk/util.h 00:02:39.296 TEST_HEADER include/spdk/uuid.h 00:02:39.296 TEST_HEADER include/spdk/version.h 00:02:39.296 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:39.296 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:39.296 TEST_HEADER include/spdk/vhost.h 00:02:39.296 TEST_HEADER include/spdk/vmd.h 00:02:39.296 TEST_HEADER include/spdk/xor.h 00:02:39.296 TEST_HEADER include/spdk/zipf.h 00:02:39.296 CXX test/cpp_headers/accel.o 00:02:39.296 CXX test/cpp_headers/assert.o 00:02:39.296 CXX test/cpp_headers/barrier.o 00:02:39.296 
CXX test/cpp_headers/base64.o 00:02:39.296 CXX test/cpp_headers/accel_module.o 00:02:39.296 CXX test/cpp_headers/bdev.o 00:02:39.296 CXX test/cpp_headers/bit_array.o 00:02:39.296 CXX test/cpp_headers/bdev_zone.o 00:02:39.296 CXX test/cpp_headers/bdev_module.o 00:02:39.296 CXX test/cpp_headers/bit_pool.o 00:02:39.296 CC app/spdk_tgt/spdk_tgt.o 00:02:39.296 CXX test/cpp_headers/blobfs.o 00:02:39.296 CXX test/cpp_headers/blobfs_bdev.o 00:02:39.296 CXX test/cpp_headers/blob_bdev.o 00:02:39.296 CXX test/cpp_headers/conf.o 00:02:39.296 CXX test/cpp_headers/config.o 00:02:39.296 CXX test/cpp_headers/blob.o 00:02:39.296 CXX test/cpp_headers/cpuset.o 00:02:39.296 CXX test/cpp_headers/crc16.o 00:02:39.296 CXX test/cpp_headers/crc32.o 00:02:39.296 CXX test/cpp_headers/dif.o 00:02:39.296 CXX test/cpp_headers/crc64.o 00:02:39.296 CXX test/cpp_headers/endian.o 00:02:39.296 CXX test/cpp_headers/dma.o 00:02:39.296 CXX test/cpp_headers/env.o 00:02:39.296 CXX test/cpp_headers/event.o 00:02:39.296 CXX test/cpp_headers/env_dpdk.o 00:02:39.296 CXX test/cpp_headers/fd_group.o 00:02:39.296 CXX test/cpp_headers/fd.o 00:02:39.296 CXX test/cpp_headers/fsdev.o 00:02:39.296 CXX test/cpp_headers/file.o 00:02:39.296 CXX test/cpp_headers/fsdev_module.o 00:02:39.296 CXX test/cpp_headers/ftl.o 00:02:39.297 CXX test/cpp_headers/fuse_dispatcher.o 00:02:39.297 CXX test/cpp_headers/histogram_data.o 00:02:39.297 CXX test/cpp_headers/idxd.o 00:02:39.297 CXX test/cpp_headers/gpt_spec.o 00:02:39.297 CXX test/cpp_headers/idxd_spec.o 00:02:39.297 CXX test/cpp_headers/init.o 00:02:39.297 CXX test/cpp_headers/hexlify.o 00:02:39.297 CXX test/cpp_headers/ioat.o 00:02:39.297 CXX test/cpp_headers/ioat_spec.o 00:02:39.297 CXX test/cpp_headers/json.o 00:02:39.297 CXX test/cpp_headers/iscsi_spec.o 00:02:39.297 CXX test/cpp_headers/jsonrpc.o 00:02:39.297 CXX test/cpp_headers/keyring_module.o 00:02:39.297 CXX test/cpp_headers/keyring.o 00:02:39.297 CC test/env/pci/pci_ut.o 00:02:39.297 CXX test/cpp_headers/log.o 00:02:39.297 CXX test/cpp_headers/lvol.o 00:02:39.297 CXX test/cpp_headers/likely.o 00:02:39.297 CXX test/cpp_headers/memory.o 00:02:39.297 CXX test/cpp_headers/md5.o 00:02:39.297 CXX test/cpp_headers/nbd.o 00:02:39.297 CXX test/cpp_headers/mmio.o 00:02:39.297 CXX test/cpp_headers/net.o 00:02:39.297 CC examples/util/zipf/zipf.o 00:02:39.297 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:39.297 CXX test/cpp_headers/notify.o 00:02:39.297 CXX test/cpp_headers/nvme.o 00:02:39.297 CXX test/cpp_headers/nvme_intel.o 00:02:39.297 CXX test/cpp_headers/nvme_ocssd.o 00:02:39.297 CXX test/cpp_headers/nvme_spec.o 00:02:39.297 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:39.297 CC test/thread/poller_perf/poller_perf.o 00:02:39.297 CC examples/ioat/verify/verify.o 00:02:39.297 CXX test/cpp_headers/nvme_zns.o 00:02:39.297 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:39.297 CXX test/cpp_headers/nvmf.o 00:02:39.297 CXX test/cpp_headers/nvmf_cmd.o 00:02:39.297 CC test/env/memory/memory_ut.o 00:02:39.297 CXX test/cpp_headers/nvmf_spec.o 00:02:39.297 CXX test/cpp_headers/nvmf_transport.o 00:02:39.297 CC test/env/vtophys/vtophys.o 00:02:39.297 CXX test/cpp_headers/opal.o 00:02:39.297 CC examples/ioat/perf/perf.o 00:02:39.297 CC test/app/jsoncat/jsoncat.o 00:02:39.297 CC test/app/histogram_perf/histogram_perf.o 00:02:39.297 CC test/app/stub/stub.o 00:02:39.297 CC app/fio/nvme/fio_plugin.o 00:02:39.297 CC test/dma/test_dma/test_dma.o 00:02:39.297 CC test/app/bdev_svc/bdev_svc.o 00:02:39.297 LINK spdk_lspci 00:02:39.562 CC app/fio/bdev/fio_plugin.o 
00:02:39.562 CC test/env/mem_callbacks/mem_callbacks.o 00:02:39.562 LINK spdk_nvme_discover 00:02:39.562 LINK interrupt_tgt 00:02:39.562 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:39.562 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:39.562 LINK iscsi_tgt 00:02:39.822 LINK rpc_client_test 00:02:39.822 LINK spdk_trace_record 00:02:39.822 LINK zipf 00:02:39.822 LINK poller_perf 00:02:39.822 LINK env_dpdk_post_init 00:02:39.822 LINK jsoncat 00:02:39.822 LINK histogram_perf 00:02:39.822 LINK spdk_tgt 00:02:39.822 CXX test/cpp_headers/opal_spec.o 00:02:39.822 CXX test/cpp_headers/pci_ids.o 00:02:39.822 CXX test/cpp_headers/pipe.o 00:02:39.822 CXX test/cpp_headers/queue.o 00:02:39.822 LINK verify 00:02:39.822 CXX test/cpp_headers/reduce.o 00:02:39.822 CXX test/cpp_headers/rpc.o 00:02:39.822 CXX test/cpp_headers/scheduler.o 00:02:39.822 CXX test/cpp_headers/scsi.o 00:02:39.822 CXX test/cpp_headers/scsi_spec.o 00:02:39.822 CXX test/cpp_headers/sock.o 00:02:39.822 CXX test/cpp_headers/stdinc.o 00:02:39.822 CXX test/cpp_headers/string.o 00:02:39.822 CXX test/cpp_headers/thread.o 00:02:39.822 CXX test/cpp_headers/trace.o 00:02:39.822 LINK nvmf_tgt 00:02:39.822 CXX test/cpp_headers/trace_parser.o 00:02:39.822 CXX test/cpp_headers/tree.o 00:02:39.822 CXX test/cpp_headers/ublk.o 00:02:39.822 CXX test/cpp_headers/util.o 00:02:39.822 CXX test/cpp_headers/uuid.o 00:02:39.822 CXX test/cpp_headers/version.o 00:02:39.822 CXX test/cpp_headers/vfio_user_pci.o 00:02:39.822 CXX test/cpp_headers/vfio_user_spec.o 00:02:39.822 CXX test/cpp_headers/vhost.o 00:02:39.822 CXX test/cpp_headers/vmd.o 00:02:39.822 CXX test/cpp_headers/zipf.o 00:02:39.822 CXX test/cpp_headers/xor.o 00:02:40.081 LINK spdk_trace 00:02:40.081 LINK vtophys 00:02:40.081 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:40.081 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:40.081 LINK bdev_svc 00:02:40.081 LINK stub 00:02:40.081 LINK ioat_perf 00:02:40.081 LINK pci_ut 00:02:40.081 LINK spdk_dd 00:02:40.340 CC test/event/reactor/reactor.o 00:02:40.340 CC test/event/reactor_perf/reactor_perf.o 00:02:40.340 CC test/event/event_perf/event_perf.o 00:02:40.340 CC examples/sock/hello_world/hello_sock.o 00:02:40.340 CC examples/vmd/led/led.o 00:02:40.340 CC examples/idxd/perf/perf.o 00:02:40.340 CC examples/vmd/lsvmd/lsvmd.o 00:02:40.340 CC test/event/app_repeat/app_repeat.o 00:02:40.340 LINK nvme_fuzz 00:02:40.340 CC test/event/scheduler/scheduler.o 00:02:40.340 CC app/vhost/vhost.o 00:02:40.340 CC examples/thread/thread/thread_ex.o 00:02:40.340 LINK spdk_nvme_identify 00:02:40.340 LINK spdk_nvme 00:02:40.340 LINK mem_callbacks 00:02:40.340 LINK test_dma 00:02:40.340 LINK reactor_perf 00:02:40.340 LINK event_perf 00:02:40.340 LINK lsvmd 00:02:40.340 LINK reactor 00:02:40.340 LINK led 00:02:40.340 LINK vhost_fuzz 00:02:40.340 LINK app_repeat 00:02:40.598 LINK spdk_bdev 00:02:40.598 LINK hello_sock 00:02:40.598 LINK spdk_nvme_perf 00:02:40.598 LINK spdk_top 00:02:40.598 LINK vhost 00:02:40.598 LINK scheduler 00:02:40.598 LINK thread 00:02:40.598 LINK idxd_perf 00:02:40.856 LINK memory_ut 00:02:40.856 CC test/nvme/err_injection/err_injection.o 00:02:40.856 CC test/nvme/cuse/cuse.o 00:02:40.856 CC test/nvme/simple_copy/simple_copy.o 00:02:40.856 CC test/nvme/e2edp/nvme_dp.o 00:02:40.856 CC test/nvme/overhead/overhead.o 00:02:40.856 CC test/nvme/aer/aer.o 00:02:40.856 CC test/nvme/reserve/reserve.o 00:02:40.856 CC test/nvme/reset/reset.o 00:02:40.856 CC test/nvme/compliance/nvme_compliance.o 00:02:40.856 CC test/nvme/fdp/fdp.o 00:02:40.856 CC 
test/nvme/boot_partition/boot_partition.o 00:02:40.856 CC test/nvme/connect_stress/connect_stress.o 00:02:40.856 CC test/nvme/startup/startup.o 00:02:40.856 CC test/nvme/sgl/sgl.o 00:02:40.856 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:40.856 CC examples/nvme/reconnect/reconnect.o 00:02:40.856 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:40.856 CC test/nvme/fused_ordering/fused_ordering.o 00:02:40.856 CC examples/nvme/hello_world/hello_world.o 00:02:40.856 CC examples/nvme/arbitration/arbitration.o 00:02:40.856 CC examples/nvme/hotplug/hotplug.o 00:02:40.856 CC test/accel/dif/dif.o 00:02:40.856 CC examples/nvme/abort/abort.o 00:02:40.856 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:40.856 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:40.856 CC test/blobfs/mkfs/mkfs.o 00:02:41.115 CC examples/accel/perf/accel_perf.o 00:02:41.115 CC test/lvol/esnap/esnap.o 00:02:41.115 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:41.115 CC examples/blob/cli/blobcli.o 00:02:41.115 CC examples/blob/hello_world/hello_blob.o 00:02:41.115 LINK boot_partition 00:02:41.115 LINK doorbell_aers 00:02:41.115 LINK startup 00:02:41.115 LINK connect_stress 00:02:41.115 LINK cmb_copy 00:02:41.115 LINK reserve 00:02:41.115 LINK err_injection 00:02:41.115 LINK pmr_persistence 00:02:41.115 LINK fused_ordering 00:02:41.115 LINK simple_copy 00:02:41.115 LINK mkfs 00:02:41.115 LINK sgl 00:02:41.115 LINK overhead 00:02:41.115 LINK hello_world 00:02:41.115 LINK nvme_dp 00:02:41.115 LINK reset 00:02:41.115 LINK nvme_compliance 00:02:41.115 LINK hotplug 00:02:41.115 LINK aer 00:02:41.115 LINK arbitration 00:02:41.115 LINK fdp 00:02:41.373 LINK reconnect 00:02:41.373 LINK iscsi_fuzz 00:02:41.373 LINK abort 00:02:41.373 LINK hello_fsdev 00:02:41.373 LINK hello_blob 00:02:41.373 LINK nvme_manage 00:02:41.373 LINK accel_perf 00:02:41.373 LINK dif 00:02:41.373 LINK blobcli 00:02:41.940 CC examples/bdev/bdevperf/bdevperf.o 00:02:41.940 CC examples/bdev/hello_world/hello_bdev.o 00:02:41.940 LINK cuse 00:02:41.940 CC test/bdev/bdevio/bdevio.o 00:02:42.196 LINK hello_bdev 00:02:42.196 LINK bdevio 00:02:42.454 LINK bdevperf 00:02:43.021 CC examples/nvmf/nvmf/nvmf.o 00:02:43.280 LINK nvmf 00:02:44.656 LINK esnap 00:02:44.656 00:02:44.656 real 0m54.999s 00:02:44.656 user 7m57.024s 00:02:44.656 sys 3m36.523s 00:02:44.656 10:31:12 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:44.656 10:31:12 make -- common/autotest_common.sh@10 -- $ set +x 00:02:44.656 ************************************ 00:02:44.656 END TEST make 00:02:44.656 ************************************ 00:02:44.914 10:31:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:44.914 10:31:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:44.914 10:31:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:44.914 10:31:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.914 10:31:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:44.914 10:31:12 -- pm/common@44 -- $ pid=2410698 00:02:44.914 10:31:12 -- pm/common@50 -- $ kill -TERM 2410698 00:02:44.914 10:31:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.914 10:31:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:44.914 10:31:12 -- pm/common@44 -- $ pid=2410700 00:02:44.914 10:31:12 -- pm/common@50 -- $ kill -TERM 2410700 00:02:44.914 10:31:12 -- pm/common@42 -- $ for monitor 
in "${MONITOR_RESOURCES[@]}" 00:02:44.914 10:31:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:44.914 10:31:12 -- pm/common@44 -- $ pid=2410701 00:02:44.914 10:31:12 -- pm/common@50 -- $ kill -TERM 2410701 00:02:44.914 10:31:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:44.914 10:31:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:44.914 10:31:12 -- pm/common@44 -- $ pid=2410724 00:02:44.914 10:31:12 -- pm/common@50 -- $ sudo -E kill -TERM 2410724 00:02:44.914 10:31:12 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:44.914 10:31:12 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:44.914 10:31:12 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:44.914 10:31:12 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:44.914 10:31:12 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:44.914 10:31:12 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:44.914 10:31:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:44.914 10:31:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:44.914 10:31:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:44.914 10:31:12 -- scripts/common.sh@336 -- # IFS=.-: 00:02:44.914 10:31:12 -- scripts/common.sh@336 -- # read -ra ver1 00:02:44.914 10:31:12 -- scripts/common.sh@337 -- # IFS=.-: 00:02:44.914 10:31:12 -- scripts/common.sh@337 -- # read -ra ver2 00:02:44.914 10:31:12 -- scripts/common.sh@338 -- # local 'op=<' 00:02:44.914 10:31:12 -- scripts/common.sh@340 -- # ver1_l=2 00:02:44.914 10:31:12 -- scripts/common.sh@341 -- # ver2_l=1 00:02:44.914 10:31:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:44.914 10:31:12 -- scripts/common.sh@344 -- # case "$op" in 00:02:44.914 10:31:12 -- scripts/common.sh@345 -- # : 1 00:02:44.914 10:31:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:44.914 10:31:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:44.914 10:31:12 -- scripts/common.sh@365 -- # decimal 1 00:02:44.914 10:31:12 -- scripts/common.sh@353 -- # local d=1 00:02:44.914 10:31:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:44.914 10:31:12 -- scripts/common.sh@355 -- # echo 1 00:02:44.914 10:31:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:44.914 10:31:12 -- scripts/common.sh@366 -- # decimal 2 00:02:44.914 10:31:12 -- scripts/common.sh@353 -- # local d=2 00:02:44.914 10:31:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:44.914 10:31:12 -- scripts/common.sh@355 -- # echo 2 00:02:44.914 10:31:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:44.914 10:31:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:44.914 10:31:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:44.914 10:31:12 -- scripts/common.sh@368 -- # return 0 00:02:44.914 10:31:12 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:44.914 10:31:12 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:44.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.914 --rc genhtml_branch_coverage=1 00:02:44.914 --rc genhtml_function_coverage=1 00:02:44.914 --rc genhtml_legend=1 00:02:44.914 --rc geninfo_all_blocks=1 00:02:44.914 --rc geninfo_unexecuted_blocks=1 00:02:44.914 00:02:44.914 ' 00:02:44.914 10:31:12 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:44.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.914 --rc genhtml_branch_coverage=1 00:02:44.914 --rc genhtml_function_coverage=1 00:02:44.914 --rc genhtml_legend=1 00:02:44.914 --rc geninfo_all_blocks=1 00:02:44.914 --rc geninfo_unexecuted_blocks=1 00:02:44.914 00:02:44.914 ' 00:02:44.914 10:31:12 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:44.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.914 --rc genhtml_branch_coverage=1 00:02:44.914 --rc genhtml_function_coverage=1 00:02:44.914 --rc genhtml_legend=1 00:02:44.914 --rc geninfo_all_blocks=1 00:02:44.914 --rc geninfo_unexecuted_blocks=1 00:02:44.914 00:02:44.914 ' 00:02:44.914 10:31:12 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:44.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:44.914 --rc genhtml_branch_coverage=1 00:02:44.914 --rc genhtml_function_coverage=1 00:02:44.914 --rc genhtml_legend=1 00:02:44.914 --rc geninfo_all_blocks=1 00:02:44.914 --rc geninfo_unexecuted_blocks=1 00:02:44.914 00:02:44.914 ' 00:02:44.914 10:31:12 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:44.914 10:31:12 -- nvmf/common.sh@7 -- # uname -s 00:02:44.914 10:31:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:44.914 10:31:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:44.914 10:31:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:44.915 10:31:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:44.915 10:31:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:44.915 10:31:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:44.915 10:31:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:44.915 10:31:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:44.915 10:31:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:44.915 10:31:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:44.915 10:31:12 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:44.915 10:31:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:44.915 10:31:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:44.915 10:31:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:44.915 10:31:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:44.915 10:31:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:44.915 10:31:12 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:44.915 10:31:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:45.173 10:31:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:45.173 10:31:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:45.173 10:31:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:45.173 10:31:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.173 10:31:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.173 10:31:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.173 10:31:12 -- paths/export.sh@5 -- # export PATH 00:02:45.173 10:31:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.173 10:31:12 -- nvmf/common.sh@51 -- # : 0 00:02:45.173 10:31:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:45.173 10:31:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:45.173 10:31:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:45.173 10:31:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:45.173 10:31:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:45.173 10:31:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:45.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:45.173 10:31:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:45.173 10:31:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:45.173 10:31:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:45.173 10:31:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:45.173 10:31:12 -- spdk/autotest.sh@32 -- # uname -s 00:02:45.173 10:31:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:45.173 10:31:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:45.173 10:31:12 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
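The host-identity setup traced just above comes from nvmf/common.sh: the host NQN is produced by nvme-cli's `nvme gen-hostnqn`, the host ID is simply the UUID suffix of that NQN, and both are packed into the NVME_HOST argument array for later `nvme connect` calls. A minimal stand-alone sketch of that pattern (a hedged reconstruction, not the exact script; it assumes nvme-cli is installed, and the commented connect line uses a hypothetical target address):

  #!/usr/bin/env bash
  # Sketch of the host-identity setup seen in nvmf/common.sh above.
  set -euo pipefail

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # host ID = UUID suffix of the generated NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

  echo "hostnqn=$NVME_HOSTNQN"
  echo "hostid=$NVME_HOSTID"
  # Illustrative reuse of the same identity (target address is hypothetical):
  # nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"
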
00:02:45.173 10:31:12 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:45.173 10:31:12 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:45.173 10:31:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:45.173 10:31:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:45.173 10:31:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:45.173 10:31:12 -- spdk/autotest.sh@48 -- # udevadm_pid=2473617 00:02:45.173 10:31:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:45.173 10:31:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:45.173 10:31:12 -- pm/common@17 -- # local monitor 00:02:45.173 10:31:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.174 10:31:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.174 10:31:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.174 10:31:12 -- pm/common@21 -- # date +%s 00:02:45.174 10:31:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.174 10:31:12 -- pm/common@21 -- # date +%s 00:02:45.174 10:31:12 -- pm/common@25 -- # sleep 1 00:02:45.174 10:31:12 -- pm/common@21 -- # date +%s 00:02:45.174 10:31:12 -- pm/common@21 -- # date +%s 00:02:45.174 10:31:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730971872 00:02:45.174 10:31:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730971872 00:02:45.174 10:31:12 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730971872 00:02:45.174 10:31:12 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730971872 00:02:45.174 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730971872_collect-vmstat.pm.log 00:02:45.174 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730971872_collect-cpu-temp.pm.log 00:02:45.174 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730971872_collect-cpu-load.pm.log 00:02:45.174 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730971872_collect-bmc-pm.bmc.pm.log 00:02:46.109 10:31:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:46.109 10:31:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:46.109 10:31:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:46.109 10:31:13 -- common/autotest_common.sh@10 -- # set +x 00:02:46.109 10:31:13 -- spdk/autotest.sh@59 -- # create_test_list 00:02:46.109 10:31:13 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:46.109 10:31:13 -- common/autotest_common.sh@10 -- # set +x 00:02:46.109 10:31:13 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:46.109 10:31:13 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:46.109 10:31:13 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:46.109 10:31:13 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:46.109 10:31:13 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:46.109 10:31:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:46.109 10:31:13 -- common/autotest_common.sh@1455 -- # uname 00:02:46.109 10:31:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:46.109 10:31:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:46.109 10:31:13 -- common/autotest_common.sh@1475 -- # uname 00:02:46.109 10:31:13 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:46.109 10:31:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:46.110 10:31:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:46.110 lcov: LCOV version 1.15 00:02:46.110 10:31:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:08.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:08.044 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:12.239 10:31:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:12.239 10:31:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:12.239 10:31:39 -- common/autotest_common.sh@10 -- # set +x 00:03:12.239 10:31:39 -- spdk/autotest.sh@78 -- # rm -f 00:03:12.239 10:31:39 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.145 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:14.145 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:14.145 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:14.145 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:14.145 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:14.145 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:14.145 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:14.145 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:14.145 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:14.145 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:14.405 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:14.405 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:14.405 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:14.405 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:14.405 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:14.405 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:14.405 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:14.405 10:31:42 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:14.405 10:31:42 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:14.405 10:31:42 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:14.405 10:31:42 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:14.405 10:31:42 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:14.405 10:31:42 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:14.405 10:31:42 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:14.405 10:31:42 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.405 10:31:42 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:14.405 10:31:42 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:14.405 10:31:42 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:14.405 10:31:42 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:14.405 10:31:42 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:14.405 10:31:42 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:14.405 10:31:42 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:14.664 No valid GPT data, bailing 00:03:14.664 10:31:42 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:14.664 10:31:42 -- scripts/common.sh@394 -- # pt= 00:03:14.664 10:31:42 -- scripts/common.sh@395 -- # return 1 00:03:14.664 10:31:42 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:14.664 1+0 records in 00:03:14.664 1+0 records out 00:03:14.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422422 s, 248 MB/s 00:03:14.664 10:31:42 -- spdk/autotest.sh@105 -- # sync 00:03:14.664 10:31:42 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:14.664 10:31:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:14.664 10:31:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:21.230 10:31:47 -- spdk/autotest.sh@111 -- # uname -s 00:03:21.230 10:31:47 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:21.230 10:31:47 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:21.231 10:31:47 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:22.606 Hugepages 00:03:22.606 node hugesize free / total 00:03:22.606 node0 1048576kB 0 / 0 00:03:22.606 node0 2048kB 0 / 0 00:03:22.606 node1 1048576kB 0 / 0 00:03:22.606 node1 2048kB 0 / 0 00:03:22.606 00:03:22.606 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:22.606 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:22.606 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:22.606 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:22.865 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:22.865 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:22.865 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:22.865 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:22.865 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:22.865 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:22.865 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:22.865 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:22.865 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:22.865 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:22.865 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:22.865 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:22.865 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:22.865 I/OAT 0000:80:04.7 8086 
2021 1 ioatdma - - 00:03:22.865 10:31:50 -- spdk/autotest.sh@117 -- # uname -s 00:03:22.865 10:31:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:22.865 10:31:50 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:22.865 10:31:50 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.505 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:25.505 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:26.443 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:26.443 10:31:53 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:27.380 10:31:54 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:27.380 10:31:54 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:27.380 10:31:54 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:27.380 10:31:54 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:27.380 10:31:54 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:27.380 10:31:54 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:27.380 10:31:54 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:27.380 10:31:54 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:27.380 10:31:54 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:27.380 10:31:54 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:27.380 10:31:54 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:03:27.380 10:31:54 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.673 Waiting for block devices as requested 00:03:30.673 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:30.673 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:30.673 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:30.673 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:30.673 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:30.673 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:30.673 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:30.673 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:30.673 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:30.933 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:30.933 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:30.933 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:30.933 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:31.192 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:31.192 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:31.192 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:31.451 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:03:31.451 10:31:58 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:31.451 10:31:58 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:31.451 10:31:58 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:03:31.451 10:31:58 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:31.451 10:31:58 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:31.451 10:31:58 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:31.451 10:31:58 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:31.451 10:31:58 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:31.451 10:31:58 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:31.451 10:31:58 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:31.451 10:31:58 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:31.451 10:31:58 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:31.451 10:31:58 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:31.451 10:31:58 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:03:31.451 10:31:58 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:31.451 10:31:58 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:31.451 10:31:58 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:31.451 10:31:58 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:31.451 10:31:58 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:31.451 10:31:59 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:31.451 10:31:59 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:31.451 10:31:59 -- common/autotest_common.sh@1541 -- # continue 00:03:31.451 10:31:59 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:31.451 10:31:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:31.451 10:31:59 -- common/autotest_common.sh@10 -- # set +x 00:03:31.451 10:31:59 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:31.451 10:31:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:31.451 10:31:59 -- common/autotest_common.sh@10 -- # set +x 00:03:31.451 10:31:59 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:34.740 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:34.740 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:34.999 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:35.258 10:32:02 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:35.258 10:32:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:35.258 10:32:02 -- common/autotest_common.sh@10 -- # set +x 00:03:35.258 10:32:02 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:35.258 10:32:02 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:35.258 10:32:02 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:35.258 10:32:02 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:35.258 10:32:02 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:35.258 10:32:02 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:35.258 10:32:02 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:35.258 10:32:02 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:35.258 10:32:02 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:35.258 10:32:02 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:35.258 10:32:02 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:35.259 10:32:02 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:35.259 10:32:02 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:35.259 10:32:02 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:35.259 10:32:02 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:03:35.259 10:32:02 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:35.259 10:32:02 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:35.259 10:32:02 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:03:35.259 10:32:02 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:35.259 10:32:02 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:03:35.259 10:32:02 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:03:35.259 10:32:02 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:03:35.259 10:32:02 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:03:35.259 10:32:02 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2488081 00:03:35.259 10:32:02 -- common/autotest_common.sh@1583 -- # waitforlisten 2488081 00:03:35.517 10:32:02 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.517 10:32:02 -- common/autotest_common.sh@833 -- # '[' -z 2488081 ']' 00:03:35.517 10:32:02 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:35.517 10:32:02 -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:35.517 10:32:02 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:35.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:35.517 10:32:02 -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:35.517 10:32:02 -- common/autotest_common.sh@10 -- # set +x 00:03:35.517 [2024-11-07 10:32:02.980737] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
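The opal-revert cleanup traced above discovers its NVMe devices the same way get_nvme_bdfs / get_nvme_bdfs_by_id do: scripts/gen_nvme.sh prints a bdev config whose traddr fields are the PCI addresses, and each address is then kept or dropped based on the PCI device ID read from sysfs (0x0a54 in this run). A hedged stand-alone sketch of that discovery step, with the workspace path taken from the log:

  #!/usr/bin/env bash
  # Sketch of the BDF discovery used by the cleanup above (hedged, not the exact helper).
  set -euo pipefail

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  wanted=0x0a54    # PCI device ID matched in the log above

  # gen_nvme.sh emits JSON; the traddr of each bdev is the NVMe PCI address (BDF).
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }

  for bdf in "${bdfs[@]}"; do
      # sysfs exposes the PCI device ID; keep only the devices we care about.
      if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$wanted" ]]; then
          echo "matched $bdf"
      fi
  done
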
00:03:35.518 [2024-11-07 10:32:02.980781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488081 ] 00:03:35.518 [2024-11-07 10:32:03.042336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.518 [2024-11-07 10:32:03.082487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:35.777 10:32:03 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:35.777 10:32:03 -- common/autotest_common.sh@866 -- # return 0 00:03:35.777 10:32:03 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:35.777 10:32:03 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:35.777 10:32:03 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:39.064 nvme0n1 00:03:39.064 10:32:06 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:39.064 [2024-11-07 10:32:06.466363] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:39.064 request: 00:03:39.064 { 00:03:39.064 "nvme_ctrlr_name": "nvme0", 00:03:39.064 "password": "test", 00:03:39.064 "method": "bdev_nvme_opal_revert", 00:03:39.064 "req_id": 1 00:03:39.064 } 00:03:39.064 Got JSON-RPC error response 00:03:39.064 response: 00:03:39.064 { 00:03:39.064 "code": -32602, 00:03:39.064 "message": "Invalid parameters" 00:03:39.064 } 00:03:39.064 10:32:06 -- common/autotest_common.sh@1589 -- # true 00:03:39.064 10:32:06 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:39.064 10:32:06 -- common/autotest_common.sh@1593 -- # killprocess 2488081 00:03:39.064 10:32:06 -- common/autotest_common.sh@952 -- # '[' -z 2488081 ']' 00:03:39.064 10:32:06 -- common/autotest_common.sh@956 -- # kill -0 2488081 00:03:39.064 10:32:06 -- common/autotest_common.sh@957 -- # uname 00:03:39.064 10:32:06 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:39.064 10:32:06 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2488081 00:03:39.064 10:32:06 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:39.064 10:32:06 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:39.064 10:32:06 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2488081' 00:03:39.064 killing process with pid 2488081 00:03:39.064 10:32:06 -- common/autotest_common.sh@971 -- # kill 2488081 00:03:39.064 10:32:06 -- common/autotest_common.sh@976 -- # wait 2488081 00:03:40.966 10:32:08 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:40.966 10:32:08 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:40.966 10:32:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.966 10:32:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:40.966 10:32:08 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:40.966 10:32:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:40.966 10:32:08 -- common/autotest_common.sh@10 -- # set +x 00:03:40.966 10:32:08 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:40.966 10:32:08 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:40.966 10:32:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:40.966 10:32:08 -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:03:40.966 10:32:08 -- common/autotest_common.sh@10 -- # set +x 00:03:40.966 ************************************ 00:03:40.966 START TEST env 00:03:40.966 ************************************ 00:03:40.966 10:32:08 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:40.966 * Looking for test storage... 00:03:40.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:40.966 10:32:08 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:40.966 10:32:08 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:40.966 10:32:08 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:40.966 10:32:08 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:40.967 10:32:08 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.967 10:32:08 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.967 10:32:08 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.967 10:32:08 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.967 10:32:08 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.967 10:32:08 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.967 10:32:08 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.967 10:32:08 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.967 10:32:08 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.967 10:32:08 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.967 10:32:08 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.967 10:32:08 env -- scripts/common.sh@344 -- # case "$op" in 00:03:40.967 10:32:08 env -- scripts/common.sh@345 -- # : 1 00:03:40.967 10:32:08 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.967 10:32:08 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.967 10:32:08 env -- scripts/common.sh@365 -- # decimal 1 00:03:40.967 10:32:08 env -- scripts/common.sh@353 -- # local d=1 00:03:40.967 10:32:08 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.967 10:32:08 env -- scripts/common.sh@355 -- # echo 1 00:03:40.967 10:32:08 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.967 10:32:08 env -- scripts/common.sh@366 -- # decimal 2 00:03:40.967 10:32:08 env -- scripts/common.sh@353 -- # local d=2 00:03:40.967 10:32:08 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.967 10:32:08 env -- scripts/common.sh@355 -- # echo 2 00:03:40.967 10:32:08 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.967 10:32:08 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.967 10:32:08 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.967 10:32:08 env -- scripts/common.sh@368 -- # return 0 00:03:40.967 10:32:08 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.967 10:32:08 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:40.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.967 --rc genhtml_branch_coverage=1 00:03:40.967 --rc genhtml_function_coverage=1 00:03:40.967 --rc genhtml_legend=1 00:03:40.967 --rc geninfo_all_blocks=1 00:03:40.967 --rc geninfo_unexecuted_blocks=1 00:03:40.967 00:03:40.967 ' 00:03:40.967 10:32:08 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:40.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.967 --rc genhtml_branch_coverage=1 00:03:40.967 --rc genhtml_function_coverage=1 00:03:40.967 --rc genhtml_legend=1 00:03:40.967 --rc geninfo_all_blocks=1 00:03:40.967 --rc geninfo_unexecuted_blocks=1 00:03:40.967 00:03:40.967 ' 00:03:40.967 10:32:08 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:40.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.967 --rc genhtml_branch_coverage=1 00:03:40.967 --rc genhtml_function_coverage=1 00:03:40.967 --rc genhtml_legend=1 00:03:40.967 --rc geninfo_all_blocks=1 00:03:40.967 --rc geninfo_unexecuted_blocks=1 00:03:40.967 00:03:40.967 ' 00:03:40.967 10:32:08 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:40.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.967 --rc genhtml_branch_coverage=1 00:03:40.967 --rc genhtml_function_coverage=1 00:03:40.967 --rc genhtml_legend=1 00:03:40.967 --rc geninfo_all_blocks=1 00:03:40.967 --rc geninfo_unexecuted_blocks=1 00:03:40.967 00:03:40.967 ' 00:03:40.967 10:32:08 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:40.967 10:32:08 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:40.967 10:32:08 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:40.967 10:32:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.967 ************************************ 00:03:40.967 START TEST env_memory 00:03:40.967 ************************************ 00:03:40.967 10:32:08 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:40.967 00:03:40.967 00:03:40.967 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.967 http://cunit.sourceforge.net/ 00:03:40.967 00:03:40.967 00:03:40.967 Suite: memory 00:03:40.967 Test: alloc and free memory map ...[2024-11-07 10:32:08.422423] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:40.967 passed 00:03:40.967 Test: mem map translation ...[2024-11-07 10:32:08.441129] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:40.967 [2024-11-07 10:32:08.441143] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:40.967 [2024-11-07 10:32:08.441177] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:40.967 [2024-11-07 10:32:08.441183] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:40.967 passed 00:03:40.967 Test: mem map registration ...[2024-11-07 10:32:08.477754] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:40.967 [2024-11-07 10:32:08.477767] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:40.967 passed 00:03:40.967 Test: mem map adjacent registrations ...passed 00:03:40.967 00:03:40.967 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.967 suites 1 1 n/a 0 0 00:03:40.967 tests 4 4 4 0 0 00:03:40.967 asserts 152 152 152 0 n/a 00:03:40.967 00:03:40.967 Elapsed time = 0.137 seconds 00:03:40.967 00:03:40.967 real 0m0.150s 00:03:40.967 user 0m0.139s 00:03:40.967 sys 0m0.010s 00:03:40.967 10:32:08 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:40.967 10:32:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:40.967 ************************************ 00:03:40.967 END TEST env_memory 00:03:40.967 ************************************ 00:03:40.967 10:32:08 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:40.967 10:32:08 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:40.967 10:32:08 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:40.967 10:32:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:40.967 ************************************ 00:03:40.967 START TEST env_vtophys 00:03:40.967 ************************************ 00:03:40.967 10:32:08 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:40.967 EAL: lib.eal log level changed from notice to debug 00:03:40.967 EAL: Detected lcore 0 as core 0 on socket 0 00:03:40.967 EAL: Detected lcore 1 as core 1 on socket 0 00:03:40.967 EAL: Detected lcore 2 as core 2 on socket 0 00:03:40.967 EAL: Detected lcore 3 as core 3 on socket 0 00:03:40.967 EAL: Detected lcore 4 as core 4 on socket 0 00:03:40.967 EAL: Detected lcore 5 as core 5 on socket 0 00:03:40.967 EAL: Detected lcore 6 as core 6 on socket 0 00:03:40.967 EAL: Detected lcore 7 as core 8 on socket 0 00:03:40.967 EAL: Detected lcore 8 as core 9 on socket 0 00:03:40.967 EAL: Detected lcore 9 as core 10 on socket 0 00:03:40.967 EAL: Detected lcore 10 as 
core 11 on socket 0 00:03:40.967 EAL: Detected lcore 11 as core 12 on socket 0 00:03:40.967 EAL: Detected lcore 12 as core 13 on socket 0 00:03:40.967 EAL: Detected lcore 13 as core 16 on socket 0 00:03:40.967 EAL: Detected lcore 14 as core 17 on socket 0 00:03:40.967 EAL: Detected lcore 15 as core 18 on socket 0 00:03:40.967 EAL: Detected lcore 16 as core 19 on socket 0 00:03:40.967 EAL: Detected lcore 17 as core 20 on socket 0 00:03:40.967 EAL: Detected lcore 18 as core 21 on socket 0 00:03:40.967 EAL: Detected lcore 19 as core 25 on socket 0 00:03:40.967 EAL: Detected lcore 20 as core 26 on socket 0 00:03:40.967 EAL: Detected lcore 21 as core 27 on socket 0 00:03:40.967 EAL: Detected lcore 22 as core 28 on socket 0 00:03:40.967 EAL: Detected lcore 23 as core 29 on socket 0 00:03:40.967 EAL: Detected lcore 24 as core 0 on socket 1 00:03:40.967 EAL: Detected lcore 25 as core 1 on socket 1 00:03:40.967 EAL: Detected lcore 26 as core 2 on socket 1 00:03:40.967 EAL: Detected lcore 27 as core 3 on socket 1 00:03:40.967 EAL: Detected lcore 28 as core 4 on socket 1 00:03:40.967 EAL: Detected lcore 29 as core 5 on socket 1 00:03:40.967 EAL: Detected lcore 30 as core 6 on socket 1 00:03:40.967 EAL: Detected lcore 31 as core 9 on socket 1 00:03:40.967 EAL: Detected lcore 32 as core 10 on socket 1 00:03:40.967 EAL: Detected lcore 33 as core 11 on socket 1 00:03:40.967 EAL: Detected lcore 34 as core 12 on socket 1 00:03:40.967 EAL: Detected lcore 35 as core 13 on socket 1 00:03:40.967 EAL: Detected lcore 36 as core 16 on socket 1 00:03:40.967 EAL: Detected lcore 37 as core 17 on socket 1 00:03:40.967 EAL: Detected lcore 38 as core 18 on socket 1 00:03:40.967 EAL: Detected lcore 39 as core 19 on socket 1 00:03:40.967 EAL: Detected lcore 40 as core 20 on socket 1 00:03:40.967 EAL: Detected lcore 41 as core 21 on socket 1 00:03:40.967 EAL: Detected lcore 42 as core 24 on socket 1 00:03:40.967 EAL: Detected lcore 43 as core 25 on socket 1 00:03:40.967 EAL: Detected lcore 44 as core 26 on socket 1 00:03:40.967 EAL: Detected lcore 45 as core 27 on socket 1 00:03:40.967 EAL: Detected lcore 46 as core 28 on socket 1 00:03:40.967 EAL: Detected lcore 47 as core 29 on socket 1 00:03:40.968 EAL: Detected lcore 48 as core 0 on socket 0 00:03:40.968 EAL: Detected lcore 49 as core 1 on socket 0 00:03:40.968 EAL: Detected lcore 50 as core 2 on socket 0 00:03:40.968 EAL: Detected lcore 51 as core 3 on socket 0 00:03:40.968 EAL: Detected lcore 52 as core 4 on socket 0 00:03:40.968 EAL: Detected lcore 53 as core 5 on socket 0 00:03:40.968 EAL: Detected lcore 54 as core 6 on socket 0 00:03:40.968 EAL: Detected lcore 55 as core 8 on socket 0 00:03:40.968 EAL: Detected lcore 56 as core 9 on socket 0 00:03:40.968 EAL: Detected lcore 57 as core 10 on socket 0 00:03:40.968 EAL: Detected lcore 58 as core 11 on socket 0 00:03:40.968 EAL: Detected lcore 59 as core 12 on socket 0 00:03:40.968 EAL: Detected lcore 60 as core 13 on socket 0 00:03:40.968 EAL: Detected lcore 61 as core 16 on socket 0 00:03:40.968 EAL: Detected lcore 62 as core 17 on socket 0 00:03:40.968 EAL: Detected lcore 63 as core 18 on socket 0 00:03:40.968 EAL: Detected lcore 64 as core 19 on socket 0 00:03:40.968 EAL: Detected lcore 65 as core 20 on socket 0 00:03:40.968 EAL: Detected lcore 66 as core 21 on socket 0 00:03:40.968 EAL: Detected lcore 67 as core 25 on socket 0 00:03:40.968 EAL: Detected lcore 68 as core 26 on socket 0 00:03:40.968 EAL: Detected lcore 69 as core 27 on socket 0 00:03:40.968 EAL: Detected lcore 70 as core 28 on socket 0 
00:03:40.968 EAL: Detected lcore 71 as core 29 on socket 0 00:03:40.968 EAL: Detected lcore 72 as core 0 on socket 1 00:03:40.968 EAL: Detected lcore 73 as core 1 on socket 1 00:03:40.968 EAL: Detected lcore 74 as core 2 on socket 1 00:03:40.968 EAL: Detected lcore 75 as core 3 on socket 1 00:03:40.968 EAL: Detected lcore 76 as core 4 on socket 1 00:03:40.968 EAL: Detected lcore 77 as core 5 on socket 1 00:03:40.968 EAL: Detected lcore 78 as core 6 on socket 1 00:03:40.968 EAL: Detected lcore 79 as core 9 on socket 1 00:03:40.968 EAL: Detected lcore 80 as core 10 on socket 1 00:03:40.968 EAL: Detected lcore 81 as core 11 on socket 1 00:03:40.968 EAL: Detected lcore 82 as core 12 on socket 1 00:03:40.968 EAL: Detected lcore 83 as core 13 on socket 1 00:03:40.968 EAL: Detected lcore 84 as core 16 on socket 1 00:03:40.968 EAL: Detected lcore 85 as core 17 on socket 1 00:03:40.968 EAL: Detected lcore 86 as core 18 on socket 1 00:03:40.968 EAL: Detected lcore 87 as core 19 on socket 1 00:03:40.968 EAL: Detected lcore 88 as core 20 on socket 1 00:03:40.968 EAL: Detected lcore 89 as core 21 on socket 1 00:03:40.968 EAL: Detected lcore 90 as core 24 on socket 1 00:03:40.968 EAL: Detected lcore 91 as core 25 on socket 1 00:03:40.968 EAL: Detected lcore 92 as core 26 on socket 1 00:03:40.968 EAL: Detected lcore 93 as core 27 on socket 1 00:03:40.968 EAL: Detected lcore 94 as core 28 on socket 1 00:03:40.968 EAL: Detected lcore 95 as core 29 on socket 1 00:03:40.968 EAL: Maximum logical cores by configuration: 128 00:03:40.968 EAL: Detected CPU lcores: 96 00:03:40.968 EAL: Detected NUMA nodes: 2 00:03:40.968 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:40.968 EAL: Detected shared linkage of DPDK 00:03:40.968 EAL: No shared files mode enabled, IPC will be disabled 00:03:41.227 EAL: Bus pci wants IOVA as 'DC' 00:03:41.227 EAL: Buses did not request a specific IOVA mode. 00:03:41.227 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:41.227 EAL: Selected IOVA mode 'VA' 00:03:41.227 EAL: Probing VFIO support... 00:03:41.227 EAL: IOMMU type 1 (Type 1) is supported 00:03:41.227 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:41.227 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:41.227 EAL: VFIO support initialized 00:03:41.227 EAL: Ask a virtual area of 0x2e000 bytes 00:03:41.227 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:41.227 EAL: Setting up physically contiguous memory... 
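At this point EAL has reported the platform it will run on: 96 logical cores across 2 NUMA nodes, a working type-1 IOMMU, and VFIO selected with IOVA as VA. A few read-only host-side checks that mirror that probe (a sketch only; the sysfs layout is assumed and can differ between kernels):

  #!/usr/bin/env bash
  # Read-only sanity checks mirroring the EAL topology/VFIO probe above (hedged sketch).
  set -euo pipefail

  echo "logical CPUs : $(nproc --all)"
  echo "NUMA nodes   : $(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)"

  # Populated IOMMU groups are the usual sign the IOMMU is enabled, which is what
  # lets EAL choose VFIO with IOVA-as-VA instead of falling back to no-IOMMU mode.
  if [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
      echo "IOMMU groups present -> vfio-pci usable"
  else
      echo "no IOMMU groups -> no-IOMMU / uio fallback"
  fi

  # Per-node 2 MiB hugepage counts (both nodes carry 2048kB pages in this run):
  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
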
00:03:41.227 EAL: Setting maximum number of open files to 524288 00:03:41.227 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:41.227 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:41.227 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:41.227 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.227 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:41.227 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.227 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.227 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:41.227 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:41.227 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.227 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:41.227 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.227 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.227 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:41.227 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:41.227 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.227 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:41.227 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.227 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.227 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:41.227 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:41.227 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.227 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:41.227 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:41.227 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.227 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:41.227 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:41.227 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:41.227 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.227 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:41.227 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.227 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.227 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:41.227 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:41.227 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.227 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:41.227 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.227 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.227 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:41.227 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:41.227 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.227 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:41.227 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.227 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.227 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:41.227 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:41.227 EAL: Ask a virtual area of 0x61000 bytes 00:03:41.227 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:41.227 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:41.227 EAL: Ask a virtual area of 0x400000000 bytes 00:03:41.227 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:41.227 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:41.227 EAL: Hugepages will be freed exactly as allocated. 00:03:41.227 EAL: No shared files mode enabled, IPC is disabled 00:03:41.227 EAL: No shared files mode enabled, IPC is disabled 00:03:41.227 EAL: TSC frequency is ~2300000 KHz 00:03:41.227 EAL: Main lcore 0 is ready (tid=7fdb7cab5a00;cpuset=[0]) 00:03:41.227 EAL: Trying to obtain current memory policy. 00:03:41.227 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.227 EAL: Restoring previous memory policy: 0 00:03:41.227 EAL: request: mp_malloc_sync 00:03:41.227 EAL: No shared files mode enabled, IPC is disabled 00:03:41.227 EAL: Heap on socket 0 was expanded by 2MB 00:03:41.227 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:41.228 EAL: Mem event callback 'spdk:(nil)' registered 00:03:41.228 00:03:41.228 00:03:41.228 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.228 http://cunit.sourceforge.net/ 00:03:41.228 00:03:41.228 00:03:41.228 Suite: components_suite 00:03:41.228 Test: vtophys_malloc_test ...passed 00:03:41.228 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:41.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.228 EAL: Restoring previous memory policy: 4 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was expanded by 4MB 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was shrunk by 4MB 00:03:41.228 EAL: Trying to obtain current memory policy. 00:03:41.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.228 EAL: Restoring previous memory policy: 4 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was expanded by 6MB 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was shrunk by 6MB 00:03:41.228 EAL: Trying to obtain current memory policy. 00:03:41.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.228 EAL: Restoring previous memory policy: 4 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was expanded by 10MB 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was shrunk by 10MB 00:03:41.228 EAL: Trying to obtain current memory policy. 
00:03:41.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.228 EAL: Restoring previous memory policy: 4 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was expanded by 18MB 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was shrunk by 18MB 00:03:41.228 EAL: Trying to obtain current memory policy. 00:03:41.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.228 EAL: Restoring previous memory policy: 4 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was expanded by 34MB 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was shrunk by 34MB 00:03:41.228 EAL: Trying to obtain current memory policy. 00:03:41.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.228 EAL: Restoring previous memory policy: 4 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was expanded by 66MB 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was shrunk by 66MB 00:03:41.228 EAL: Trying to obtain current memory policy. 00:03:41.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.228 EAL: Restoring previous memory policy: 4 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was expanded by 130MB 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was shrunk by 130MB 00:03:41.228 EAL: Trying to obtain current memory policy. 00:03:41.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.228 EAL: Restoring previous memory policy: 4 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.228 EAL: request: mp_malloc_sync 00:03:41.228 EAL: No shared files mode enabled, IPC is disabled 00:03:41.228 EAL: Heap on socket 0 was expanded by 258MB 00:03:41.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.487 EAL: request: mp_malloc_sync 00:03:41.487 EAL: No shared files mode enabled, IPC is disabled 00:03:41.487 EAL: Heap on socket 0 was shrunk by 258MB 00:03:41.487 EAL: Trying to obtain current memory policy. 
00:03:41.487 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.487 EAL: Restoring previous memory policy: 4 00:03:41.487 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.487 EAL: request: mp_malloc_sync 00:03:41.487 EAL: No shared files mode enabled, IPC is disabled 00:03:41.487 EAL: Heap on socket 0 was expanded by 514MB 00:03:41.487 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.745 EAL: request: mp_malloc_sync 00:03:41.745 EAL: No shared files mode enabled, IPC is disabled 00:03:41.745 EAL: Heap on socket 0 was shrunk by 514MB 00:03:41.745 EAL: Trying to obtain current memory policy. 00:03:41.745 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:41.745 EAL: Restoring previous memory policy: 4 00:03:41.745 EAL: Calling mem event callback 'spdk:(nil)' 00:03:41.745 EAL: request: mp_malloc_sync 00:03:41.745 EAL: No shared files mode enabled, IPC is disabled 00:03:41.745 EAL: Heap on socket 0 was expanded by 1026MB 00:03:42.003 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.261 EAL: request: mp_malloc_sync 00:03:42.261 EAL: No shared files mode enabled, IPC is disabled 00:03:42.261 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:42.261 passed 00:03:42.261 00:03:42.261 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.261 suites 1 1 n/a 0 0 00:03:42.261 tests 2 2 2 0 0 00:03:42.261 asserts 497 497 497 0 n/a 00:03:42.261 00:03:42.261 Elapsed time = 0.970 seconds 00:03:42.261 EAL: Calling mem event callback 'spdk:(nil)' 00:03:42.261 EAL: request: mp_malloc_sync 00:03:42.261 EAL: No shared files mode enabled, IPC is disabled 00:03:42.261 EAL: Heap on socket 0 was shrunk by 2MB 00:03:42.261 EAL: No shared files mode enabled, IPC is disabled 00:03:42.261 EAL: No shared files mode enabled, IPC is disabled 00:03:42.261 EAL: No shared files mode enabled, IPC is disabled 00:03:42.261 00:03:42.261 real 0m1.093s 00:03:42.261 user 0m0.642s 00:03:42.261 sys 0m0.418s 00:03:42.261 10:32:09 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:42.261 10:32:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:42.261 ************************************ 00:03:42.261 END TEST env_vtophys 00:03:42.261 ************************************ 00:03:42.261 10:32:09 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:42.261 10:32:09 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:42.261 10:32:09 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:42.261 10:32:09 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.261 ************************************ 00:03:42.261 START TEST env_pci 00:03:42.261 ************************************ 00:03:42.261 10:32:09 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:42.261 00:03:42.261 00:03:42.261 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.261 http://cunit.sourceforge.net/ 00:03:42.261 00:03:42.261 00:03:42.261 Suite: pci 00:03:42.261 Test: pci_hook ...[2024-11-07 10:32:09.769722] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2489336 has claimed it 00:03:42.261 EAL: Cannot find device (10000:00:01.0) 00:03:42.261 EAL: Failed to attach device on primary process 00:03:42.261 passed 00:03:42.261 00:03:42.261 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:42.261 suites 1 1 n/a 0 0 00:03:42.261 tests 1 1 1 0 0 00:03:42.261 asserts 25 25 25 0 n/a 00:03:42.261 00:03:42.261 Elapsed time = 0.028 seconds 00:03:42.261 00:03:42.261 real 0m0.048s 00:03:42.261 user 0m0.012s 00:03:42.261 sys 0m0.036s 00:03:42.261 10:32:09 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:42.261 10:32:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:42.261 ************************************ 00:03:42.261 END TEST env_pci 00:03:42.261 ************************************ 00:03:42.261 10:32:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:42.261 10:32:09 env -- env/env.sh@15 -- # uname 00:03:42.261 10:32:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:42.261 10:32:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:42.261 10:32:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:42.261 10:32:09 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:03:42.261 10:32:09 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:42.261 10:32:09 env -- common/autotest_common.sh@10 -- # set +x 00:03:42.261 ************************************ 00:03:42.261 START TEST env_dpdk_post_init 00:03:42.261 ************************************ 00:03:42.261 10:32:09 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:42.261 EAL: Detected CPU lcores: 96 00:03:42.261 EAL: Detected NUMA nodes: 2 00:03:42.261 EAL: Detected shared linkage of DPDK 00:03:42.261 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:42.261 EAL: Selected IOVA mode 'VA' 00:03:42.261 EAL: VFIO support initialized 00:03:42.261 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:42.520 EAL: Using IOMMU type 1 (Type 1) 00:03:42.520 EAL: Ignore mapping IO port bar(1) 00:03:42.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:42.520 EAL: Ignore mapping IO port bar(1) 00:03:42.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:42.520 EAL: Ignore mapping IO port bar(1) 00:03:42.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:42.520 EAL: Ignore mapping IO port bar(1) 00:03:42.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:42.520 EAL: Ignore mapping IO port bar(1) 00:03:42.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:42.520 EAL: Ignore mapping IO port bar(1) 00:03:42.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:42.520 EAL: Ignore mapping IO port bar(1) 00:03:42.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:42.520 EAL: Ignore mapping IO port bar(1) 00:03:42.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:43.453 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:43.453 EAL: Ignore mapping IO port bar(1) 00:03:43.453 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:43.454 EAL: Ignore mapping IO port bar(1) 00:03:43.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:43.454 EAL: Ignore mapping IO port bar(1) 00:03:43.454 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:43.454 EAL: Ignore mapping IO port bar(1) 00:03:43.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:43.454 EAL: Ignore mapping IO port bar(1) 00:03:43.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:43.454 EAL: Ignore mapping IO port bar(1) 00:03:43.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:43.454 EAL: Ignore mapping IO port bar(1) 00:03:43.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:43.454 EAL: Ignore mapping IO port bar(1) 00:03:43.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:46.732 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:46.732 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:46.732 Starting DPDK initialization... 00:03:46.732 Starting SPDK post initialization... 00:03:46.732 SPDK NVMe probe 00:03:46.732 Attaching to 0000:5e:00.0 00:03:46.732 Attached to 0000:5e:00.0 00:03:46.732 Cleaning up... 00:03:46.732 00:03:46.732 real 0m4.314s 00:03:46.732 user 0m2.939s 00:03:46.732 sys 0m0.451s 00:03:46.732 10:32:14 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:46.732 10:32:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:46.732 ************************************ 00:03:46.732 END TEST env_dpdk_post_init 00:03:46.732 ************************************ 00:03:46.732 10:32:14 env -- env/env.sh@26 -- # uname 00:03:46.732 10:32:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:46.732 10:32:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:46.732 10:32:14 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:46.732 10:32:14 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:46.732 10:32:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.732 ************************************ 00:03:46.732 START TEST env_mem_callbacks 00:03:46.732 ************************************ 00:03:46.732 10:32:14 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:46.732 EAL: Detected CPU lcores: 96 00:03:46.732 EAL: Detected NUMA nodes: 2 00:03:46.732 EAL: Detected shared linkage of DPDK 00:03:46.732 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:46.732 EAL: Selected IOVA mode 'VA' 00:03:46.732 EAL: VFIO support initialized 00:03:46.732 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:46.732 00:03:46.732 00:03:46.732 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.732 http://cunit.sourceforge.net/ 00:03:46.732 00:03:46.732 00:03:46.732 Suite: memory 00:03:46.732 Test: test ... 
00:03:46.732 register 0x200000200000 2097152 00:03:46.732 malloc 3145728 00:03:46.732 register 0x200000400000 4194304 00:03:46.732 buf 0x200000500000 len 3145728 PASSED 00:03:46.732 malloc 64 00:03:46.732 buf 0x2000004fff40 len 64 PASSED 00:03:46.732 malloc 4194304 00:03:46.732 register 0x200000800000 6291456 00:03:46.732 buf 0x200000a00000 len 4194304 PASSED 00:03:46.732 free 0x200000500000 3145728 00:03:46.732 free 0x2000004fff40 64 00:03:46.732 unregister 0x200000400000 4194304 PASSED 00:03:46.732 free 0x200000a00000 4194304 00:03:46.732 unregister 0x200000800000 6291456 PASSED 00:03:46.732 malloc 8388608 00:03:46.732 register 0x200000400000 10485760 00:03:46.732 buf 0x200000600000 len 8388608 PASSED 00:03:46.732 free 0x200000600000 8388608 00:03:46.732 unregister 0x200000400000 10485760 PASSED 00:03:46.732 passed 00:03:46.733 00:03:46.733 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.733 suites 1 1 n/a 0 0 00:03:46.733 tests 1 1 1 0 0 00:03:46.733 asserts 15 15 15 0 n/a 00:03:46.733 00:03:46.733 Elapsed time = 0.005 seconds 00:03:46.733 00:03:46.733 real 0m0.054s 00:03:46.733 user 0m0.020s 00:03:46.733 sys 0m0.034s 00:03:46.733 10:32:14 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:46.733 10:32:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:46.733 ************************************ 00:03:46.733 END TEST env_mem_callbacks 00:03:46.733 ************************************ 00:03:46.733 00:03:46.733 real 0m6.168s 00:03:46.733 user 0m3.991s 00:03:46.733 sys 0m1.251s 00:03:46.733 10:32:14 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:46.733 10:32:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.733 ************************************ 00:03:46.733 END TEST env 00:03:46.733 ************************************ 00:03:46.733 10:32:14 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:46.733 10:32:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:46.733 10:32:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:46.733 10:32:14 -- common/autotest_common.sh@10 -- # set +x 00:03:46.989 ************************************ 00:03:46.989 START TEST rpc 00:03:46.989 ************************************ 00:03:46.989 10:32:14 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:46.989 * Looking for test storage... 
00:03:46.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:46.989 10:32:14 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:46.989 10:32:14 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:46.989 10:32:14 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:46.989 10:32:14 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:46.989 10:32:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.989 10:32:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.989 10:32:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.989 10:32:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.989 10:32:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.989 10:32:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.989 10:32:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.989 10:32:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.989 10:32:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.989 10:32:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.989 10:32:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.989 10:32:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:46.989 10:32:14 rpc -- scripts/common.sh@345 -- # : 1 00:03:46.989 10:32:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.989 10:32:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:46.989 10:32:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:46.989 10:32:14 rpc -- scripts/common.sh@353 -- # local d=1 00:03:46.989 10:32:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.989 10:32:14 rpc -- scripts/common.sh@355 -- # echo 1 00:03:46.989 10:32:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.989 10:32:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:46.989 10:32:14 rpc -- scripts/common.sh@353 -- # local d=2 00:03:46.989 10:32:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.989 10:32:14 rpc -- scripts/common.sh@355 -- # echo 2 00:03:46.989 10:32:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.989 10:32:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.989 10:32:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.989 10:32:14 rpc -- scripts/common.sh@368 -- # return 0 00:03:46.989 10:32:14 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.989 10:32:14 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:46.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.990 --rc genhtml_branch_coverage=1 00:03:46.990 --rc genhtml_function_coverage=1 00:03:46.990 --rc genhtml_legend=1 00:03:46.990 --rc geninfo_all_blocks=1 00:03:46.990 --rc geninfo_unexecuted_blocks=1 00:03:46.990 00:03:46.990 ' 00:03:46.990 10:32:14 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:46.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.990 --rc genhtml_branch_coverage=1 00:03:46.990 --rc genhtml_function_coverage=1 00:03:46.990 --rc genhtml_legend=1 00:03:46.990 --rc geninfo_all_blocks=1 00:03:46.990 --rc geninfo_unexecuted_blocks=1 00:03:46.990 00:03:46.990 ' 00:03:46.990 10:32:14 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:46.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.990 --rc genhtml_branch_coverage=1 00:03:46.990 --rc genhtml_function_coverage=1 
00:03:46.990 --rc genhtml_legend=1 00:03:46.990 --rc geninfo_all_blocks=1 00:03:46.990 --rc geninfo_unexecuted_blocks=1 00:03:46.990 00:03:46.990 ' 00:03:46.990 10:32:14 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:46.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.990 --rc genhtml_branch_coverage=1 00:03:46.990 --rc genhtml_function_coverage=1 00:03:46.990 --rc genhtml_legend=1 00:03:46.990 --rc geninfo_all_blocks=1 00:03:46.990 --rc geninfo_unexecuted_blocks=1 00:03:46.990 00:03:46.990 ' 00:03:46.990 10:32:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2490219 00:03:46.990 10:32:14 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:46.990 10:32:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:46.990 10:32:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2490219 00:03:46.990 10:32:14 rpc -- common/autotest_common.sh@833 -- # '[' -z 2490219 ']' 00:03:46.990 10:32:14 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:46.990 10:32:14 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:46.990 10:32:14 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:46.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:46.990 10:32:14 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:46.990 10:32:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.990 [2024-11-07 10:32:14.625328] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:03:46.990 [2024-11-07 10:32:14.625373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490219 ] 00:03:47.246 [2024-11-07 10:32:14.688225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.246 [2024-11-07 10:32:14.727752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:47.246 [2024-11-07 10:32:14.727791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2490219' to capture a snapshot of events at runtime. 00:03:47.246 [2024-11-07 10:32:14.727799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:47.246 [2024-11-07 10:32:14.727808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:47.246 [2024-11-07 10:32:14.727813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2490219 for offline analysis/debug. 
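The spdk_tgt instance above was launched by rpc.sh with '-e bdev', which is why it reports the bdev tracepoint group mask and prints the two hints about capturing traces. The same setup can be reproduced by hand from the top of an SPDK build tree, roughly as follows (the spdk_trace tool named in the notice lands next to spdk_tgt under build/bin in a default build; socket and shm paths as printed above):

  # Start the target with bdev tracepoints enabled and wait for its RPC socket.
  ./build/bin/spdk_tgt -e bdev &
  TGT_PID=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  # Snapshot the live trace ring, as the notice above suggests ...
  ./build/bin/spdk_trace -s spdk_tgt -p "$TGT_PID"

  # ... or keep the shared-memory file for offline analysis after the run.
  cp "/dev/shm/spdk_tgt_trace.pid${TGT_PID}" /tmp/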
00:03:47.246 [2024-11-07 10:32:14.728378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.504 10:32:14 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:47.504 10:32:14 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:47.504 10:32:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:47.504 10:32:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:47.504 10:32:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:47.504 10:32:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:47.504 10:32:14 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:47.504 10:32:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:47.504 10:32:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.504 ************************************ 00:03:47.504 START TEST rpc_integrity 00:03:47.504 ************************************ 00:03:47.504 10:32:14 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:47.504 10:32:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:47.504 10:32:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.504 10:32:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.504 10:32:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.504 10:32:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:47.504 10:32:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:47.504 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:47.504 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:47.504 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.504 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.504 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.504 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:47.504 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:47.504 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.504 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.504 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.504 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:47.504 { 00:03:47.504 "name": "Malloc0", 00:03:47.504 "aliases": [ 00:03:47.504 "a1b4230e-dfeb-4a4a-9588-36116a782ca3" 00:03:47.504 ], 00:03:47.504 "product_name": "Malloc disk", 00:03:47.504 "block_size": 512, 00:03:47.504 "num_blocks": 16384, 00:03:47.504 "uuid": "a1b4230e-dfeb-4a4a-9588-36116a782ca3", 00:03:47.504 "assigned_rate_limits": { 00:03:47.504 "rw_ios_per_sec": 0, 00:03:47.504 "rw_mbytes_per_sec": 0, 00:03:47.504 "r_mbytes_per_sec": 0, 00:03:47.504 "w_mbytes_per_sec": 0 00:03:47.504 }, 
00:03:47.504 "claimed": false, 00:03:47.504 "zoned": false, 00:03:47.504 "supported_io_types": { 00:03:47.504 "read": true, 00:03:47.504 "write": true, 00:03:47.504 "unmap": true, 00:03:47.504 "flush": true, 00:03:47.504 "reset": true, 00:03:47.504 "nvme_admin": false, 00:03:47.504 "nvme_io": false, 00:03:47.504 "nvme_io_md": false, 00:03:47.504 "write_zeroes": true, 00:03:47.504 "zcopy": true, 00:03:47.504 "get_zone_info": false, 00:03:47.504 "zone_management": false, 00:03:47.504 "zone_append": false, 00:03:47.504 "compare": false, 00:03:47.504 "compare_and_write": false, 00:03:47.504 "abort": true, 00:03:47.504 "seek_hole": false, 00:03:47.504 "seek_data": false, 00:03:47.504 "copy": true, 00:03:47.504 "nvme_iov_md": false 00:03:47.504 }, 00:03:47.504 "memory_domains": [ 00:03:47.504 { 00:03:47.504 "dma_device_id": "system", 00:03:47.504 "dma_device_type": 1 00:03:47.504 }, 00:03:47.504 { 00:03:47.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.504 "dma_device_type": 2 00:03:47.504 } 00:03:47.504 ], 00:03:47.504 "driver_specific": {} 00:03:47.504 } 00:03:47.504 ]' 00:03:47.504 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:47.504 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:47.504 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:47.504 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.505 [2024-11-07 10:32:15.074912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:47.505 [2024-11-07 10:32:15.074943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:47.505 [2024-11-07 10:32:15.074956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x100d6d0 00:03:47.505 [2024-11-07 10:32:15.074963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:47.505 [2024-11-07 10:32:15.076075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:47.505 [2024-11-07 10:32:15.076098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:47.505 Passthru0 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.505 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.505 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:47.505 { 00:03:47.505 "name": "Malloc0", 00:03:47.505 "aliases": [ 00:03:47.505 "a1b4230e-dfeb-4a4a-9588-36116a782ca3" 00:03:47.505 ], 00:03:47.505 "product_name": "Malloc disk", 00:03:47.505 "block_size": 512, 00:03:47.505 "num_blocks": 16384, 00:03:47.505 "uuid": "a1b4230e-dfeb-4a4a-9588-36116a782ca3", 00:03:47.505 "assigned_rate_limits": { 00:03:47.505 "rw_ios_per_sec": 0, 00:03:47.505 "rw_mbytes_per_sec": 0, 00:03:47.505 "r_mbytes_per_sec": 0, 00:03:47.505 "w_mbytes_per_sec": 0 00:03:47.505 }, 00:03:47.505 "claimed": true, 00:03:47.505 "claim_type": "exclusive_write", 00:03:47.505 "zoned": false, 00:03:47.505 "supported_io_types": { 00:03:47.505 "read": true, 00:03:47.505 "write": true, 00:03:47.505 "unmap": true, 00:03:47.505 "flush": 
true, 00:03:47.505 "reset": true, 00:03:47.505 "nvme_admin": false, 00:03:47.505 "nvme_io": false, 00:03:47.505 "nvme_io_md": false, 00:03:47.505 "write_zeroes": true, 00:03:47.505 "zcopy": true, 00:03:47.505 "get_zone_info": false, 00:03:47.505 "zone_management": false, 00:03:47.505 "zone_append": false, 00:03:47.505 "compare": false, 00:03:47.505 "compare_and_write": false, 00:03:47.505 "abort": true, 00:03:47.505 "seek_hole": false, 00:03:47.505 "seek_data": false, 00:03:47.505 "copy": true, 00:03:47.505 "nvme_iov_md": false 00:03:47.505 }, 00:03:47.505 "memory_domains": [ 00:03:47.505 { 00:03:47.505 "dma_device_id": "system", 00:03:47.505 "dma_device_type": 1 00:03:47.505 }, 00:03:47.505 { 00:03:47.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.505 "dma_device_type": 2 00:03:47.505 } 00:03:47.505 ], 00:03:47.505 "driver_specific": {} 00:03:47.505 }, 00:03:47.505 { 00:03:47.505 "name": "Passthru0", 00:03:47.505 "aliases": [ 00:03:47.505 "e90c46c7-6172-5a3d-9e35-01701fafed6c" 00:03:47.505 ], 00:03:47.505 "product_name": "passthru", 00:03:47.505 "block_size": 512, 00:03:47.505 "num_blocks": 16384, 00:03:47.505 "uuid": "e90c46c7-6172-5a3d-9e35-01701fafed6c", 00:03:47.505 "assigned_rate_limits": { 00:03:47.505 "rw_ios_per_sec": 0, 00:03:47.505 "rw_mbytes_per_sec": 0, 00:03:47.505 "r_mbytes_per_sec": 0, 00:03:47.505 "w_mbytes_per_sec": 0 00:03:47.505 }, 00:03:47.505 "claimed": false, 00:03:47.505 "zoned": false, 00:03:47.505 "supported_io_types": { 00:03:47.505 "read": true, 00:03:47.505 "write": true, 00:03:47.505 "unmap": true, 00:03:47.505 "flush": true, 00:03:47.505 "reset": true, 00:03:47.505 "nvme_admin": false, 00:03:47.505 "nvme_io": false, 00:03:47.505 "nvme_io_md": false, 00:03:47.505 "write_zeroes": true, 00:03:47.505 "zcopy": true, 00:03:47.505 "get_zone_info": false, 00:03:47.505 "zone_management": false, 00:03:47.505 "zone_append": false, 00:03:47.505 "compare": false, 00:03:47.505 "compare_and_write": false, 00:03:47.505 "abort": true, 00:03:47.505 "seek_hole": false, 00:03:47.505 "seek_data": false, 00:03:47.505 "copy": true, 00:03:47.505 "nvme_iov_md": false 00:03:47.505 }, 00:03:47.505 "memory_domains": [ 00:03:47.505 { 00:03:47.505 "dma_device_id": "system", 00:03:47.505 "dma_device_type": 1 00:03:47.505 }, 00:03:47.505 { 00:03:47.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.505 "dma_device_type": 2 00:03:47.505 } 00:03:47.505 ], 00:03:47.505 "driver_specific": { 00:03:47.505 "passthru": { 00:03:47.505 "name": "Passthru0", 00:03:47.505 "base_bdev_name": "Malloc0" 00:03:47.505 } 00:03:47.505 } 00:03:47.505 } 00:03:47.505 ]' 00:03:47.505 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:47.505 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:47.505 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.505 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.505 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.505 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.505 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:47.505 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:47.789 10:32:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:47.789 00:03:47.790 real 0m0.243s 00:03:47.790 user 0m0.141s 00:03:47.790 sys 0m0.035s 00:03:47.790 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:47.790 10:32:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:47.790 ************************************ 00:03:47.790 END TEST rpc_integrity 00:03:47.790 ************************************ 00:03:47.790 10:32:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:47.790 10:32:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:47.790 10:32:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:47.790 10:32:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.790 ************************************ 00:03:47.790 START TEST rpc_plugins 00:03:47.790 ************************************ 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:47.790 10:32:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.790 10:32:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:47.790 10:32:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.790 10:32:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:47.790 { 00:03:47.790 "name": "Malloc1", 00:03:47.790 "aliases": [ 00:03:47.790 "992027cf-6af8-47b2-a47d-ca1190fa51ec" 00:03:47.790 ], 00:03:47.790 "product_name": "Malloc disk", 00:03:47.790 "block_size": 4096, 00:03:47.790 "num_blocks": 256, 00:03:47.790 "uuid": "992027cf-6af8-47b2-a47d-ca1190fa51ec", 00:03:47.790 "assigned_rate_limits": { 00:03:47.790 "rw_ios_per_sec": 0, 00:03:47.790 "rw_mbytes_per_sec": 0, 00:03:47.790 "r_mbytes_per_sec": 0, 00:03:47.790 "w_mbytes_per_sec": 0 00:03:47.790 }, 00:03:47.790 "claimed": false, 00:03:47.790 "zoned": false, 00:03:47.790 "supported_io_types": { 00:03:47.790 "read": true, 00:03:47.790 "write": true, 00:03:47.790 "unmap": true, 00:03:47.790 "flush": true, 00:03:47.790 "reset": true, 00:03:47.790 "nvme_admin": false, 00:03:47.790 "nvme_io": false, 00:03:47.790 "nvme_io_md": false, 00:03:47.790 "write_zeroes": true, 00:03:47.790 "zcopy": true, 00:03:47.790 "get_zone_info": false, 00:03:47.790 "zone_management": false, 00:03:47.790 "zone_append": false, 00:03:47.790 "compare": false, 00:03:47.790 "compare_and_write": false, 00:03:47.790 "abort": true, 00:03:47.790 "seek_hole": false, 00:03:47.790 "seek_data": false, 00:03:47.790 "copy": true, 00:03:47.790 "nvme_iov_md": false 
00:03:47.790 }, 00:03:47.790 "memory_domains": [ 00:03:47.790 { 00:03:47.790 "dma_device_id": "system", 00:03:47.790 "dma_device_type": 1 00:03:47.790 }, 00:03:47.790 { 00:03:47.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:47.790 "dma_device_type": 2 00:03:47.790 } 00:03:47.790 ], 00:03:47.790 "driver_specific": {} 00:03:47.790 } 00:03:47.790 ]' 00:03:47.790 10:32:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:47.790 10:32:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:47.790 10:32:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.790 10:32:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:47.790 10:32:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:47.790 10:32:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:47.790 10:32:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:47.790 00:03:47.790 real 0m0.113s 00:03:47.790 user 0m0.063s 00:03:47.790 sys 0m0.015s 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:47.790 10:32:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:47.790 ************************************ 00:03:47.790 END TEST rpc_plugins 00:03:47.790 ************************************ 00:03:47.790 10:32:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:47.790 10:32:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:47.790 10:32:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:47.790 10:32:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.048 ************************************ 00:03:48.048 START TEST rpc_trace_cmd_test 00:03:48.048 ************************************ 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:48.048 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2490219", 00:03:48.048 "tpoint_group_mask": "0x8", 00:03:48.048 "iscsi_conn": { 00:03:48.048 "mask": "0x2", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "scsi": { 00:03:48.048 "mask": "0x4", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "bdev": { 00:03:48.048 "mask": "0x8", 00:03:48.048 "tpoint_mask": "0xffffffffffffffff" 00:03:48.048 }, 00:03:48.048 "nvmf_rdma": { 00:03:48.048 "mask": "0x10", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "nvmf_tcp": { 00:03:48.048 "mask": "0x20", 00:03:48.048 
"tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "ftl": { 00:03:48.048 "mask": "0x40", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "blobfs": { 00:03:48.048 "mask": "0x80", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "dsa": { 00:03:48.048 "mask": "0x200", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "thread": { 00:03:48.048 "mask": "0x400", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "nvme_pcie": { 00:03:48.048 "mask": "0x800", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "iaa": { 00:03:48.048 "mask": "0x1000", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "nvme_tcp": { 00:03:48.048 "mask": "0x2000", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "bdev_nvme": { 00:03:48.048 "mask": "0x4000", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "sock": { 00:03:48.048 "mask": "0x8000", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "blob": { 00:03:48.048 "mask": "0x10000", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "bdev_raid": { 00:03:48.048 "mask": "0x20000", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 }, 00:03:48.048 "scheduler": { 00:03:48.048 "mask": "0x40000", 00:03:48.048 "tpoint_mask": "0x0" 00:03:48.048 } 00:03:48.048 }' 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:48.048 00:03:48.048 real 0m0.222s 00:03:48.048 user 0m0.188s 00:03:48.048 sys 0m0.026s 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.048 10:32:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.048 ************************************ 00:03:48.048 END TEST rpc_trace_cmd_test 00:03:48.048 ************************************ 00:03:48.048 10:32:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:48.048 10:32:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:48.048 10:32:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:48.048 10:32:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.048 10:32:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.048 10:32:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.307 ************************************ 00:03:48.307 START TEST rpc_daemon_integrity 00:03:48.307 ************************************ 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.307 10:32:15 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:48.307 { 00:03:48.307 "name": "Malloc2", 00:03:48.307 "aliases": [ 00:03:48.307 "73d08cb1-82fe-46e8-8196-35eea40b246b" 00:03:48.307 ], 00:03:48.307 "product_name": "Malloc disk", 00:03:48.307 "block_size": 512, 00:03:48.307 "num_blocks": 16384, 00:03:48.307 "uuid": "73d08cb1-82fe-46e8-8196-35eea40b246b", 00:03:48.307 "assigned_rate_limits": { 00:03:48.307 "rw_ios_per_sec": 0, 00:03:48.307 "rw_mbytes_per_sec": 0, 00:03:48.307 "r_mbytes_per_sec": 0, 00:03:48.307 "w_mbytes_per_sec": 0 00:03:48.307 }, 00:03:48.307 "claimed": false, 00:03:48.307 "zoned": false, 00:03:48.307 "supported_io_types": { 00:03:48.307 "read": true, 00:03:48.307 "write": true, 00:03:48.307 "unmap": true, 00:03:48.307 "flush": true, 00:03:48.307 "reset": true, 00:03:48.307 "nvme_admin": false, 00:03:48.307 "nvme_io": false, 00:03:48.307 "nvme_io_md": false, 00:03:48.307 "write_zeroes": true, 00:03:48.307 "zcopy": true, 00:03:48.307 "get_zone_info": false, 00:03:48.307 "zone_management": false, 00:03:48.307 "zone_append": false, 00:03:48.307 "compare": false, 00:03:48.307 "compare_and_write": false, 00:03:48.307 "abort": true, 00:03:48.307 "seek_hole": false, 00:03:48.307 "seek_data": false, 00:03:48.307 "copy": true, 00:03:48.307 "nvme_iov_md": false 00:03:48.307 }, 00:03:48.307 "memory_domains": [ 00:03:48.307 { 00:03:48.307 "dma_device_id": "system", 00:03:48.307 "dma_device_type": 1 00:03:48.307 }, 00:03:48.307 { 00:03:48.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.307 "dma_device_type": 2 00:03:48.307 } 00:03:48.307 ], 00:03:48.307 "driver_specific": {} 00:03:48.307 } 00:03:48.307 ]' 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.307 [2024-11-07 10:32:15.873078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:48.307 
[2024-11-07 10:32:15.873106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:48.307 [2024-11-07 10:32:15.873119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x109de60 00:03:48.307 [2024-11-07 10:32:15.873126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:48.307 [2024-11-07 10:32:15.874240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:48.307 [2024-11-07 10:32:15.874261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:48.307 Passthru0 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:48.307 { 00:03:48.307 "name": "Malloc2", 00:03:48.307 "aliases": [ 00:03:48.307 "73d08cb1-82fe-46e8-8196-35eea40b246b" 00:03:48.307 ], 00:03:48.307 "product_name": "Malloc disk", 00:03:48.307 "block_size": 512, 00:03:48.307 "num_blocks": 16384, 00:03:48.307 "uuid": "73d08cb1-82fe-46e8-8196-35eea40b246b", 00:03:48.307 "assigned_rate_limits": { 00:03:48.307 "rw_ios_per_sec": 0, 00:03:48.307 "rw_mbytes_per_sec": 0, 00:03:48.307 "r_mbytes_per_sec": 0, 00:03:48.307 "w_mbytes_per_sec": 0 00:03:48.307 }, 00:03:48.307 "claimed": true, 00:03:48.307 "claim_type": "exclusive_write", 00:03:48.307 "zoned": false, 00:03:48.307 "supported_io_types": { 00:03:48.307 "read": true, 00:03:48.307 "write": true, 00:03:48.307 "unmap": true, 00:03:48.307 "flush": true, 00:03:48.307 "reset": true, 00:03:48.307 "nvme_admin": false, 00:03:48.307 "nvme_io": false, 00:03:48.307 "nvme_io_md": false, 00:03:48.307 "write_zeroes": true, 00:03:48.307 "zcopy": true, 00:03:48.307 "get_zone_info": false, 00:03:48.307 "zone_management": false, 00:03:48.307 "zone_append": false, 00:03:48.307 "compare": false, 00:03:48.307 "compare_and_write": false, 00:03:48.307 "abort": true, 00:03:48.307 "seek_hole": false, 00:03:48.307 "seek_data": false, 00:03:48.307 "copy": true, 00:03:48.307 "nvme_iov_md": false 00:03:48.307 }, 00:03:48.307 "memory_domains": [ 00:03:48.307 { 00:03:48.307 "dma_device_id": "system", 00:03:48.307 "dma_device_type": 1 00:03:48.307 }, 00:03:48.307 { 00:03:48.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.307 "dma_device_type": 2 00:03:48.307 } 00:03:48.307 ], 00:03:48.307 "driver_specific": {} 00:03:48.307 }, 00:03:48.307 { 00:03:48.307 "name": "Passthru0", 00:03:48.307 "aliases": [ 00:03:48.307 "97409b98-1afb-5c20-a294-1dac2f7eda4e" 00:03:48.307 ], 00:03:48.307 "product_name": "passthru", 00:03:48.307 "block_size": 512, 00:03:48.307 "num_blocks": 16384, 00:03:48.307 "uuid": "97409b98-1afb-5c20-a294-1dac2f7eda4e", 00:03:48.307 "assigned_rate_limits": { 00:03:48.307 "rw_ios_per_sec": 0, 00:03:48.307 "rw_mbytes_per_sec": 0, 00:03:48.307 "r_mbytes_per_sec": 0, 00:03:48.307 "w_mbytes_per_sec": 0 00:03:48.307 }, 00:03:48.307 "claimed": false, 00:03:48.307 "zoned": false, 00:03:48.307 "supported_io_types": { 00:03:48.307 "read": true, 00:03:48.307 "write": true, 00:03:48.307 "unmap": true, 00:03:48.307 "flush": true, 00:03:48.307 "reset": true, 
00:03:48.307 "nvme_admin": false, 00:03:48.307 "nvme_io": false, 00:03:48.307 "nvme_io_md": false, 00:03:48.307 "write_zeroes": true, 00:03:48.307 "zcopy": true, 00:03:48.307 "get_zone_info": false, 00:03:48.307 "zone_management": false, 00:03:48.307 "zone_append": false, 00:03:48.307 "compare": false, 00:03:48.307 "compare_and_write": false, 00:03:48.307 "abort": true, 00:03:48.307 "seek_hole": false, 00:03:48.307 "seek_data": false, 00:03:48.307 "copy": true, 00:03:48.307 "nvme_iov_md": false 00:03:48.307 }, 00:03:48.307 "memory_domains": [ 00:03:48.307 { 00:03:48.307 "dma_device_id": "system", 00:03:48.307 "dma_device_type": 1 00:03:48.307 }, 00:03:48.307 { 00:03:48.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.307 "dma_device_type": 2 00:03:48.307 } 00:03:48.307 ], 00:03:48.307 "driver_specific": { 00:03:48.307 "passthru": { 00:03:48.307 "name": "Passthru0", 00:03:48.307 "base_bdev_name": "Malloc2" 00:03:48.307 } 00:03:48.307 } 00:03:48.307 } 00:03:48.307 ]' 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:48.307 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:48.308 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.308 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:48.308 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:48.308 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:48.567 10:32:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:48.567 00:03:48.567 real 0m0.256s 00:03:48.567 user 0m0.154s 00:03:48.567 sys 0m0.034s 00:03:48.567 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.567 10:32:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.567 ************************************ 00:03:48.567 END TEST rpc_daemon_integrity 00:03:48.567 ************************************ 00:03:48.567 10:32:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:48.567 10:32:16 rpc -- rpc/rpc.sh@84 -- # killprocess 2490219 00:03:48.567 10:32:16 rpc -- common/autotest_common.sh@952 -- # '[' -z 2490219 ']' 00:03:48.567 10:32:16 rpc -- common/autotest_common.sh@956 -- # kill -0 2490219 00:03:48.567 10:32:16 rpc -- common/autotest_common.sh@957 -- # uname 00:03:48.567 10:32:16 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:48.567 10:32:16 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2490219 
00:03:48.567 10:32:16 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:48.567 10:32:16 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:48.567 10:32:16 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2490219' 00:03:48.567 killing process with pid 2490219 00:03:48.567 10:32:16 rpc -- common/autotest_common.sh@971 -- # kill 2490219 00:03:48.567 10:32:16 rpc -- common/autotest_common.sh@976 -- # wait 2490219 00:03:48.825 00:03:48.825 real 0m1.976s 00:03:48.825 user 0m2.492s 00:03:48.825 sys 0m0.663s 00:03:48.825 10:32:16 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:48.825 10:32:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.825 ************************************ 00:03:48.825 END TEST rpc 00:03:48.825 ************************************ 00:03:48.825 10:32:16 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:48.825 10:32:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:48.825 10:32:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:48.825 10:32:16 -- common/autotest_common.sh@10 -- # set +x 00:03:48.825 ************************************ 00:03:48.825 START TEST skip_rpc 00:03:48.825 ************************************ 00:03:48.825 10:32:16 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:49.084 * Looking for test storage... 00:03:49.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.084 10:32:16 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:49.084 10:32:16 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:49.084 10:32:16 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:49.085 10:32:16 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.085 10:32:16 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:49.085 10:32:16 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.085 10:32:16 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:49.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.085 --rc genhtml_branch_coverage=1 00:03:49.085 --rc genhtml_function_coverage=1 00:03:49.085 --rc genhtml_legend=1 00:03:49.085 --rc geninfo_all_blocks=1 00:03:49.085 --rc geninfo_unexecuted_blocks=1 00:03:49.085 00:03:49.085 ' 00:03:49.085 10:32:16 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:49.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.085 --rc genhtml_branch_coverage=1 00:03:49.085 --rc genhtml_function_coverage=1 00:03:49.085 --rc genhtml_legend=1 00:03:49.085 --rc geninfo_all_blocks=1 00:03:49.085 --rc geninfo_unexecuted_blocks=1 00:03:49.085 00:03:49.085 ' 00:03:49.085 10:32:16 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:49.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.085 --rc genhtml_branch_coverage=1 00:03:49.085 --rc genhtml_function_coverage=1 00:03:49.085 --rc genhtml_legend=1 00:03:49.085 --rc geninfo_all_blocks=1 00:03:49.085 --rc geninfo_unexecuted_blocks=1 00:03:49.085 00:03:49.085 ' 00:03:49.085 10:32:16 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:49.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.085 --rc genhtml_branch_coverage=1 00:03:49.085 --rc genhtml_function_coverage=1 00:03:49.085 --rc genhtml_legend=1 00:03:49.085 --rc geninfo_all_blocks=1 00:03:49.085 --rc geninfo_unexecuted_blocks=1 00:03:49.085 00:03:49.085 ' 00:03:49.085 10:32:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:49.085 10:32:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:49.085 10:32:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:49.085 10:32:16 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:49.085 10:32:16 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:49.085 10:32:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.085 ************************************ 00:03:49.085 START TEST skip_rpc 00:03:49.085 ************************************ 00:03:49.085 10:32:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:49.085 
10:32:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2490854 00:03:49.085 10:32:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.085 10:32:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:49.085 10:32:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:49.085 [2024-11-07 10:32:16.707507] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:03:49.085 [2024-11-07 10:32:16.707544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490854 ] 00:03:49.343 [2024-11-07 10:32:16.769333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.343 [2024-11-07 10:32:16.809772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2490854 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 2490854 ']' 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 2490854 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2490854 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2490854' 00:03:54.608 killing process with pid 2490854 00:03:54.608 10:32:21 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 2490854 00:03:54.608 10:32:21 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 2490854 00:03:54.608 00:03:54.608 real 0m5.373s 00:03:54.608 user 0m5.137s 00:03:54.608 sys 0m0.274s 00:03:54.608 10:32:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:54.608 10:32:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.608 ************************************ 00:03:54.608 END TEST skip_rpc 00:03:54.608 ************************************ 00:03:54.608 10:32:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:54.608 10:32:22 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:54.608 10:32:22 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:54.608 10:32:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.608 ************************************ 00:03:54.608 START TEST skip_rpc_with_json 00:03:54.608 ************************************ 00:03:54.608 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:03:54.609 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:54.609 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2491782 00:03:54.609 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:54.609 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:54.609 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2491782 00:03:54.609 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 2491782 ']' 00:03:54.609 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.609 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:54.609 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:54.609 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:54.609 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.609 [2024-11-07 10:32:22.150248] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:03:54.609 [2024-11-07 10:32:22.150292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491782 ] 00:03:54.609 [2024-11-07 10:32:22.211006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.609 [2024-11-07 10:32:22.252968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.867 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:54.867 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:03:54.867 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:54.867 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:54.867 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.867 [2024-11-07 10:32:22.462015] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:54.867 request: 00:03:54.867 { 00:03:54.867 "trtype": "tcp", 00:03:54.867 "method": "nvmf_get_transports", 00:03:54.867 "req_id": 1 00:03:54.867 } 00:03:54.867 Got JSON-RPC error response 00:03:54.867 response: 00:03:54.867 { 00:03:54.867 "code": -19, 00:03:54.867 "message": "No such device" 00:03:54.867 } 00:03:54.867 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:54.867 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:54.867 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:54.867 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:54.867 [2024-11-07 10:32:22.474117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:54.867 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:54.867 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:54.867 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:54.867 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.126 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:55.126 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:55.126 { 00:03:55.126 "subsystems": [ 00:03:55.126 { 00:03:55.126 "subsystem": "fsdev", 00:03:55.126 "config": [ 00:03:55.126 { 00:03:55.126 "method": "fsdev_set_opts", 00:03:55.126 "params": { 00:03:55.126 "fsdev_io_pool_size": 65535, 00:03:55.126 "fsdev_io_cache_size": 256 00:03:55.126 } 00:03:55.126 } 00:03:55.126 ] 00:03:55.126 }, 00:03:55.126 { 00:03:55.126 "subsystem": "vfio_user_target", 00:03:55.126 "config": null 00:03:55.126 }, 00:03:55.126 { 00:03:55.126 "subsystem": "keyring", 00:03:55.126 "config": [] 00:03:55.126 }, 00:03:55.126 { 00:03:55.126 "subsystem": "iobuf", 00:03:55.126 "config": [ 00:03:55.126 { 00:03:55.126 "method": "iobuf_set_options", 00:03:55.126 "params": { 00:03:55.126 "small_pool_count": 8192, 00:03:55.126 "large_pool_count": 1024, 00:03:55.126 "small_bufsize": 8192, 00:03:55.126 "large_bufsize": 135168, 00:03:55.126 "enable_numa": false 00:03:55.126 } 00:03:55.126 } 
00:03:55.126 ] 00:03:55.126 }, 00:03:55.126 { 00:03:55.126 "subsystem": "sock", 00:03:55.126 "config": [ 00:03:55.126 { 00:03:55.126 "method": "sock_set_default_impl", 00:03:55.126 "params": { 00:03:55.126 "impl_name": "posix" 00:03:55.126 } 00:03:55.126 }, 00:03:55.126 { 00:03:55.126 "method": "sock_impl_set_options", 00:03:55.126 "params": { 00:03:55.126 "impl_name": "ssl", 00:03:55.126 "recv_buf_size": 4096, 00:03:55.126 "send_buf_size": 4096, 00:03:55.126 "enable_recv_pipe": true, 00:03:55.126 "enable_quickack": false, 00:03:55.126 "enable_placement_id": 0, 00:03:55.126 "enable_zerocopy_send_server": true, 00:03:55.126 "enable_zerocopy_send_client": false, 00:03:55.126 "zerocopy_threshold": 0, 00:03:55.126 "tls_version": 0, 00:03:55.126 "enable_ktls": false 00:03:55.126 } 00:03:55.126 }, 00:03:55.126 { 00:03:55.126 "method": "sock_impl_set_options", 00:03:55.126 "params": { 00:03:55.126 "impl_name": "posix", 00:03:55.126 "recv_buf_size": 2097152, 00:03:55.126 "send_buf_size": 2097152, 00:03:55.126 "enable_recv_pipe": true, 00:03:55.126 "enable_quickack": false, 00:03:55.126 "enable_placement_id": 0, 00:03:55.126 "enable_zerocopy_send_server": true, 00:03:55.126 "enable_zerocopy_send_client": false, 00:03:55.126 "zerocopy_threshold": 0, 00:03:55.126 "tls_version": 0, 00:03:55.126 "enable_ktls": false 00:03:55.126 } 00:03:55.126 } 00:03:55.126 ] 00:03:55.126 }, 00:03:55.126 { 00:03:55.126 "subsystem": "vmd", 00:03:55.126 "config": [] 00:03:55.126 }, 00:03:55.126 { 00:03:55.126 "subsystem": "accel", 00:03:55.126 "config": [ 00:03:55.126 { 00:03:55.126 "method": "accel_set_options", 00:03:55.126 "params": { 00:03:55.126 "small_cache_size": 128, 00:03:55.126 "large_cache_size": 16, 00:03:55.126 "task_count": 2048, 00:03:55.126 "sequence_count": 2048, 00:03:55.126 "buf_count": 2048 00:03:55.126 } 00:03:55.126 } 00:03:55.126 ] 00:03:55.126 }, 00:03:55.126 { 00:03:55.126 "subsystem": "bdev", 00:03:55.126 "config": [ 00:03:55.126 { 00:03:55.126 "method": "bdev_set_options", 00:03:55.126 "params": { 00:03:55.126 "bdev_io_pool_size": 65535, 00:03:55.126 "bdev_io_cache_size": 256, 00:03:55.126 "bdev_auto_examine": true, 00:03:55.126 "iobuf_small_cache_size": 128, 00:03:55.126 "iobuf_large_cache_size": 16 00:03:55.126 } 00:03:55.126 }, 00:03:55.126 { 00:03:55.126 "method": "bdev_raid_set_options", 00:03:55.126 "params": { 00:03:55.126 "process_window_size_kb": 1024, 00:03:55.126 "process_max_bandwidth_mb_sec": 0 00:03:55.126 } 00:03:55.126 }, 00:03:55.126 { 00:03:55.126 "method": "bdev_iscsi_set_options", 00:03:55.126 "params": { 00:03:55.126 "timeout_sec": 30 00:03:55.126 } 00:03:55.126 }, 00:03:55.126 { 00:03:55.126 "method": "bdev_nvme_set_options", 00:03:55.126 "params": { 00:03:55.126 "action_on_timeout": "none", 00:03:55.126 "timeout_us": 0, 00:03:55.126 "timeout_admin_us": 0, 00:03:55.126 "keep_alive_timeout_ms": 10000, 00:03:55.126 "arbitration_burst": 0, 00:03:55.126 "low_priority_weight": 0, 00:03:55.126 "medium_priority_weight": 0, 00:03:55.126 "high_priority_weight": 0, 00:03:55.126 "nvme_adminq_poll_period_us": 10000, 00:03:55.126 "nvme_ioq_poll_period_us": 0, 00:03:55.126 "io_queue_requests": 0, 00:03:55.127 "delay_cmd_submit": true, 00:03:55.127 "transport_retry_count": 4, 00:03:55.127 "bdev_retry_count": 3, 00:03:55.127 "transport_ack_timeout": 0, 00:03:55.127 "ctrlr_loss_timeout_sec": 0, 00:03:55.127 "reconnect_delay_sec": 0, 00:03:55.127 "fast_io_fail_timeout_sec": 0, 00:03:55.127 "disable_auto_failback": false, 00:03:55.127 "generate_uuids": false, 00:03:55.127 "transport_tos": 
0, 00:03:55.127 "nvme_error_stat": false, 00:03:55.127 "rdma_srq_size": 0, 00:03:55.127 "io_path_stat": false, 00:03:55.127 "allow_accel_sequence": false, 00:03:55.127 "rdma_max_cq_size": 0, 00:03:55.127 "rdma_cm_event_timeout_ms": 0, 00:03:55.127 "dhchap_digests": [ 00:03:55.127 "sha256", 00:03:55.127 "sha384", 00:03:55.127 "sha512" 00:03:55.127 ], 00:03:55.127 "dhchap_dhgroups": [ 00:03:55.127 "null", 00:03:55.127 "ffdhe2048", 00:03:55.127 "ffdhe3072", 00:03:55.127 "ffdhe4096", 00:03:55.127 "ffdhe6144", 00:03:55.127 "ffdhe8192" 00:03:55.127 ] 00:03:55.127 } 00:03:55.127 }, 00:03:55.127 { 00:03:55.127 "method": "bdev_nvme_set_hotplug", 00:03:55.127 "params": { 00:03:55.127 "period_us": 100000, 00:03:55.127 "enable": false 00:03:55.127 } 00:03:55.127 }, 00:03:55.127 { 00:03:55.127 "method": "bdev_wait_for_examine" 00:03:55.127 } 00:03:55.127 ] 00:03:55.127 }, 00:03:55.127 { 00:03:55.127 "subsystem": "scsi", 00:03:55.127 "config": null 00:03:55.127 }, 00:03:55.127 { 00:03:55.127 "subsystem": "scheduler", 00:03:55.127 "config": [ 00:03:55.127 { 00:03:55.127 "method": "framework_set_scheduler", 00:03:55.127 "params": { 00:03:55.127 "name": "static" 00:03:55.127 } 00:03:55.127 } 00:03:55.127 ] 00:03:55.127 }, 00:03:55.127 { 00:03:55.127 "subsystem": "vhost_scsi", 00:03:55.127 "config": [] 00:03:55.127 }, 00:03:55.127 { 00:03:55.127 "subsystem": "vhost_blk", 00:03:55.127 "config": [] 00:03:55.127 }, 00:03:55.127 { 00:03:55.127 "subsystem": "ublk", 00:03:55.127 "config": [] 00:03:55.127 }, 00:03:55.127 { 00:03:55.127 "subsystem": "nbd", 00:03:55.127 "config": [] 00:03:55.127 }, 00:03:55.127 { 00:03:55.127 "subsystem": "nvmf", 00:03:55.127 "config": [ 00:03:55.127 { 00:03:55.127 "method": "nvmf_set_config", 00:03:55.127 "params": { 00:03:55.127 "discovery_filter": "match_any", 00:03:55.127 "admin_cmd_passthru": { 00:03:55.127 "identify_ctrlr": false 00:03:55.127 }, 00:03:55.127 "dhchap_digests": [ 00:03:55.127 "sha256", 00:03:55.127 "sha384", 00:03:55.127 "sha512" 00:03:55.127 ], 00:03:55.127 "dhchap_dhgroups": [ 00:03:55.127 "null", 00:03:55.127 "ffdhe2048", 00:03:55.127 "ffdhe3072", 00:03:55.127 "ffdhe4096", 00:03:55.127 "ffdhe6144", 00:03:55.127 "ffdhe8192" 00:03:55.127 ] 00:03:55.127 } 00:03:55.127 }, 00:03:55.127 { 00:03:55.127 "method": "nvmf_set_max_subsystems", 00:03:55.127 "params": { 00:03:55.127 "max_subsystems": 1024 00:03:55.127 } 00:03:55.127 }, 00:03:55.127 { 00:03:55.127 "method": "nvmf_set_crdt", 00:03:55.127 "params": { 00:03:55.127 "crdt1": 0, 00:03:55.127 "crdt2": 0, 00:03:55.127 "crdt3": 0 00:03:55.127 } 00:03:55.127 }, 00:03:55.127 { 00:03:55.127 "method": "nvmf_create_transport", 00:03:55.127 "params": { 00:03:55.127 "trtype": "TCP", 00:03:55.127 "max_queue_depth": 128, 00:03:55.127 "max_io_qpairs_per_ctrlr": 127, 00:03:55.127 "in_capsule_data_size": 4096, 00:03:55.127 "max_io_size": 131072, 00:03:55.127 "io_unit_size": 131072, 00:03:55.127 "max_aq_depth": 128, 00:03:55.127 "num_shared_buffers": 511, 00:03:55.127 "buf_cache_size": 4294967295, 00:03:55.127 "dif_insert_or_strip": false, 00:03:55.127 "zcopy": false, 00:03:55.127 "c2h_success": true, 00:03:55.127 "sock_priority": 0, 00:03:55.127 "abort_timeout_sec": 1, 00:03:55.127 "ack_timeout": 0, 00:03:55.127 "data_wr_pool_size": 0 00:03:55.127 } 00:03:55.127 } 00:03:55.127 ] 00:03:55.127 }, 00:03:55.127 { 00:03:55.127 "subsystem": "iscsi", 00:03:55.127 "config": [ 00:03:55.127 { 00:03:55.127 "method": "iscsi_set_options", 00:03:55.127 "params": { 00:03:55.127 "node_base": "iqn.2016-06.io.spdk", 00:03:55.127 "max_sessions": 
128, 00:03:55.127 "max_connections_per_session": 2, 00:03:55.127 "max_queue_depth": 64, 00:03:55.127 "default_time2wait": 2, 00:03:55.127 "default_time2retain": 20, 00:03:55.127 "first_burst_length": 8192, 00:03:55.127 "immediate_data": true, 00:03:55.127 "allow_duplicated_isid": false, 00:03:55.127 "error_recovery_level": 0, 00:03:55.127 "nop_timeout": 60, 00:03:55.127 "nop_in_interval": 30, 00:03:55.127 "disable_chap": false, 00:03:55.127 "require_chap": false, 00:03:55.127 "mutual_chap": false, 00:03:55.127 "chap_group": 0, 00:03:55.127 "max_large_datain_per_connection": 64, 00:03:55.127 "max_r2t_per_connection": 4, 00:03:55.127 "pdu_pool_size": 36864, 00:03:55.127 "immediate_data_pool_size": 16384, 00:03:55.127 "data_out_pool_size": 2048 00:03:55.127 } 00:03:55.127 } 00:03:55.127 ] 00:03:55.127 } 00:03:55.127 ] 00:03:55.127 } 00:03:55.127 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:55.127 10:32:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2491782 00:03:55.127 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2491782 ']' 00:03:55.127 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2491782 00:03:55.127 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:03:55.127 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:55.127 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2491782 00:03:55.127 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:55.127 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:55.127 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2491782' 00:03:55.127 killing process with pid 2491782 00:03:55.127 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2491782 00:03:55.127 10:32:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2491782 00:03:55.423 10:32:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2491818 00:03:55.423 10:32:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:55.423 10:32:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:00.691 10:32:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2491818 00:04:00.691 10:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2491818 ']' 00:04:00.691 10:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2491818 00:04:00.691 10:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:00.691 10:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:00.691 10:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2491818 00:04:00.691 10:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:00.691 10:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:00.691 10:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 2491818' 00:04:00.691 killing process with pid 2491818 00:04:00.691 10:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2491818 00:04:00.691 10:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2491818 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:00.950 00:04:00.950 real 0m6.269s 00:04:00.950 user 0m5.993s 00:04:00.950 sys 0m0.580s 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:00.950 ************************************ 00:04:00.950 END TEST skip_rpc_with_json 00:04:00.950 ************************************ 00:04:00.950 10:32:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:00.950 10:32:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:00.950 10:32:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:00.950 10:32:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.950 ************************************ 00:04:00.950 START TEST skip_rpc_with_delay 00:04:00.950 ************************************ 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:00.950 
[2024-11-07 10:32:28.493029] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:00.950 00:04:00.950 real 0m0.069s 00:04:00.950 user 0m0.043s 00:04:00.950 sys 0m0.025s 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:00.950 10:32:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:00.950 ************************************ 00:04:00.950 END TEST skip_rpc_with_delay 00:04:00.950 ************************************ 00:04:00.950 10:32:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:00.950 10:32:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:00.950 10:32:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:00.950 10:32:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:00.950 10:32:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:00.950 10:32:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.950 ************************************ 00:04:00.950 START TEST exit_on_failed_rpc_init 00:04:00.950 ************************************ 00:04:00.950 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:00.950 10:32:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2492790 00:04:00.950 10:32:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2492790 00:04:00.950 10:32:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:00.950 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 2492790 ']' 00:04:00.950 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.950 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:00.950 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.950 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:00.950 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:01.209 [2024-11-07 10:32:28.632093] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:01.209 [2024-11-07 10:32:28.632146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492790 ] 00:04:01.209 [2024-11-07 10:32:28.696645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.209 [2024-11-07 10:32:28.741347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:01.467 10:32:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:01.467 [2024-11-07 10:32:29.019872] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:01.467 [2024-11-07 10:32:29.019917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493011 ] 00:04:01.467 [2024-11-07 10:32:29.080599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.467 [2024-11-07 10:32:29.121900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.467 [2024-11-07 10:32:29.121960] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:01.467 [2024-11-07 10:32:29.121970] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:01.467 [2024-11-07 10:32:29.121979] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2492790 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 2492790 ']' 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 2492790 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2492790 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:01.725 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2492790' 00:04:01.725 killing process with pid 2492790 00:04:01.726 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 2492790 00:04:01.726 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 2492790 00:04:01.984 00:04:01.984 real 0m0.937s 00:04:01.984 user 0m1.005s 00:04:01.984 sys 0m0.362s 00:04:01.984 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:01.984 10:32:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:01.984 ************************************ 00:04:01.984 END TEST exit_on_failed_rpc_init 00:04:01.984 ************************************ 00:04:01.984 10:32:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:01.984 00:04:01.984 real 0m13.105s 00:04:01.984 user 0m12.383s 00:04:01.984 sys 0m1.525s 00:04:01.984 10:32:29 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:01.984 10:32:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.984 ************************************ 00:04:01.984 END TEST skip_rpc 00:04:01.984 ************************************ 00:04:01.984 10:32:29 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:01.984 10:32:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:01.984 10:32:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:01.984 10:32:29 -- 
common/autotest_common.sh@10 -- # set +x 00:04:01.984 ************************************ 00:04:01.984 START TEST rpc_client 00:04:01.984 ************************************ 00:04:01.984 10:32:29 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:02.242 * Looking for test storage... 00:04:02.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:02.243 10:32:29 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:02.243 10:32:29 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:02.243 10:32:29 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:02.243 10:32:29 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.243 10:32:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:02.243 10:32:29 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.243 10:32:29 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:02.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.243 --rc genhtml_branch_coverage=1 00:04:02.243 --rc genhtml_function_coverage=1 00:04:02.243 --rc genhtml_legend=1 00:04:02.243 --rc geninfo_all_blocks=1 00:04:02.243 --rc geninfo_unexecuted_blocks=1 00:04:02.243 00:04:02.243 ' 00:04:02.243 10:32:29 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:02.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.243 --rc genhtml_branch_coverage=1 00:04:02.243 --rc genhtml_function_coverage=1 00:04:02.243 --rc genhtml_legend=1 00:04:02.243 --rc geninfo_all_blocks=1 00:04:02.243 --rc geninfo_unexecuted_blocks=1 00:04:02.243 00:04:02.243 ' 00:04:02.243 10:32:29 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:02.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.243 --rc genhtml_branch_coverage=1 00:04:02.243 --rc genhtml_function_coverage=1 00:04:02.243 --rc genhtml_legend=1 00:04:02.243 --rc geninfo_all_blocks=1 00:04:02.243 --rc geninfo_unexecuted_blocks=1 00:04:02.243 00:04:02.243 ' 00:04:02.243 10:32:29 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:02.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.243 --rc genhtml_branch_coverage=1 00:04:02.243 --rc genhtml_function_coverage=1 00:04:02.243 --rc genhtml_legend=1 00:04:02.243 --rc geninfo_all_blocks=1 00:04:02.243 --rc geninfo_unexecuted_blocks=1 00:04:02.243 00:04:02.243 ' 00:04:02.243 10:32:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:02.243 OK 00:04:02.243 10:32:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:02.243 00:04:02.243 real 0m0.195s 00:04:02.243 user 0m0.124s 00:04:02.243 sys 0m0.084s 00:04:02.243 10:32:29 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:02.243 10:32:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:02.243 ************************************ 00:04:02.243 END TEST rpc_client 00:04:02.243 ************************************ 00:04:02.243 10:32:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
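
Before the json_config trace continues, the skip_rpc cases that finished above all reduce to the same shape: start spdk_tgt, check that JSON-RPC calls over its Unix socket succeed or fail as the flags demand, then tear the target down. A minimal stand-alone sketch of that flow follows; the binary paths, flags and the 5-second settle time are taken from the trace, while the direct rpc.py invocation and the default /var/tmp/spdk.sock socket are assumptions standing in for the rpc_cmd helper used by the test scripts.

    #!/usr/bin/env bash
    # Sketch only: mirrors the skip_rpc flow traced above, not the test script itself.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Target launched with --no-rpc-server: any RPC call is expected to fail.
    "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version; then
        echo "unexpected: RPC answered although the server was disabled" >&2
        kill "$pid"; exit 1
    fi
    kill "$pid"; wait "$pid"

    # Combining --no-rpc-server with --wait-for-rpc is rejected at startup
    # ("Cannot use '--wait-for-rpc' if no RPC server is going to be started.").
    ! "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc
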
00:04:02.243 10:32:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:02.243 10:32:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:02.243 10:32:29 -- common/autotest_common.sh@10 -- # set +x 00:04:02.243 ************************************ 00:04:02.243 START TEST json_config 00:04:02.243 ************************************ 00:04:02.243 10:32:29 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:02.502 10:32:29 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:02.502 10:32:29 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:02.502 10:32:29 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:02.502 10:32:30 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:02.502 10:32:30 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.502 10:32:30 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.502 10:32:30 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.502 10:32:30 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.502 10:32:30 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.502 10:32:30 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.502 10:32:30 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.502 10:32:30 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.502 10:32:30 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.502 10:32:30 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.502 10:32:30 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.502 10:32:30 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:02.502 10:32:30 json_config -- scripts/common.sh@345 -- # : 1 00:04:02.502 10:32:30 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.502 10:32:30 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.502 10:32:30 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:02.502 10:32:30 json_config -- scripts/common.sh@353 -- # local d=1 00:04:02.502 10:32:30 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.502 10:32:30 json_config -- scripts/common.sh@355 -- # echo 1 00:04:02.502 10:32:30 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.502 10:32:30 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:02.502 10:32:30 json_config -- scripts/common.sh@353 -- # local d=2 00:04:02.502 10:32:30 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.502 10:32:30 json_config -- scripts/common.sh@355 -- # echo 2 00:04:02.502 10:32:30 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.502 10:32:30 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.502 10:32:30 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.502 10:32:30 json_config -- scripts/common.sh@368 -- # return 0 00:04:02.502 10:32:30 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.502 10:32:30 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:02.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.502 --rc genhtml_branch_coverage=1 00:04:02.502 --rc genhtml_function_coverage=1 00:04:02.502 --rc genhtml_legend=1 00:04:02.502 --rc geninfo_all_blocks=1 00:04:02.502 --rc geninfo_unexecuted_blocks=1 00:04:02.502 00:04:02.502 ' 00:04:02.502 10:32:30 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:02.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.502 --rc genhtml_branch_coverage=1 00:04:02.502 --rc genhtml_function_coverage=1 00:04:02.502 --rc genhtml_legend=1 00:04:02.502 --rc geninfo_all_blocks=1 00:04:02.502 --rc geninfo_unexecuted_blocks=1 00:04:02.502 00:04:02.502 ' 00:04:02.502 10:32:30 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:02.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.502 --rc genhtml_branch_coverage=1 00:04:02.502 --rc genhtml_function_coverage=1 00:04:02.502 --rc genhtml_legend=1 00:04:02.502 --rc geninfo_all_blocks=1 00:04:02.502 --rc geninfo_unexecuted_blocks=1 00:04:02.502 00:04:02.502 ' 00:04:02.502 10:32:30 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:02.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.502 --rc genhtml_branch_coverage=1 00:04:02.502 --rc genhtml_function_coverage=1 00:04:02.502 --rc genhtml_legend=1 00:04:02.503 --rc geninfo_all_blocks=1 00:04:02.503 --rc geninfo_unexecuted_blocks=1 00:04:02.503 00:04:02.503 ' 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:02.503 10:32:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:02.503 10:32:30 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:02.503 10:32:30 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:02.503 10:32:30 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:02.503 10:32:30 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:02.503 10:32:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.503 10:32:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.503 10:32:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.503 10:32:30 json_config -- paths/export.sh@5 -- # export PATH 00:04:02.503 10:32:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@51 -- # : 0 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:02.503 10:32:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:02.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:02.503 10:32:30 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:02.503 INFO: JSON configuration test init 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:02.503 10:32:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:02.503 10:32:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:02.503 10:32:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:02.503 10:32:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.503 10:32:30 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:02.503 10:32:30 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:02.503 10:32:30 json_config -- json_config/common.sh@10 -- # shift 00:04:02.503 10:32:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:02.503 10:32:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:02.503 10:32:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:02.503 10:32:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.503 10:32:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.503 10:32:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2493196 00:04:02.503 10:32:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:02.503 Waiting for target to run... 00:04:02.503 10:32:30 json_config -- json_config/common.sh@25 -- # waitforlisten 2493196 /var/tmp/spdk_tgt.sock 00:04:02.503 10:32:30 json_config -- common/autotest_common.sh@833 -- # '[' -z 2493196 ']' 00:04:02.503 10:32:30 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:02.503 10:32:30 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:02.503 10:32:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:02.503 10:32:30 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:02.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:02.503 10:32:30 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:02.503 10:32:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.503 [2024-11-07 10:32:30.128510] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:02.503 [2024-11-07 10:32:30.128560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2493196 ] 00:04:03.069 [2024-11-07 10:32:30.568320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.069 [2024-11-07 10:32:30.620732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.327 10:32:30 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:03.327 10:32:30 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:03.327 10:32:30 json_config -- json_config/common.sh@26 -- # echo '' 00:04:03.327 00:04:03.327 10:32:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:03.327 10:32:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:03.327 10:32:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.327 10:32:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.327 10:32:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:03.327 10:32:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:03.327 10:32:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.327 10:32:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.585 10:32:30 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:03.585 10:32:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:03.585 10:32:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:06.869 10:32:34 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:06.869 10:32:34 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:06.869 10:32:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.869 10:32:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.869 10:32:34 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:06.869 10:32:34 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:06.870 10:32:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:06.870 10:32:34 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@54 -- # sort 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:06.870 10:32:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:06.870 10:32:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:06.870 10:32:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:06.870 10:32:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:06.870 10:32:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:06.870 10:32:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:06.870 MallocForNvmf0 00:04:07.128 10:32:34 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:07.128 10:32:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:07.128 MallocForNvmf1 00:04:07.128 10:32:34 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:07.128 10:32:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:07.386 [2024-11-07 10:32:34.902291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.386 10:32:34 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:07.386 10:32:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:07.644 10:32:35 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:07.644 10:32:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:07.644 10:32:35 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:07.644 10:32:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:07.901 10:32:35 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:07.901 10:32:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:08.159 [2024-11-07 10:32:35.636671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:08.159 10:32:35 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:08.159 10:32:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.159 10:32:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.159 10:32:35 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:08.159 10:32:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.159 10:32:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.159 10:32:35 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:08.159 10:32:35 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:08.159 10:32:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:08.417 MallocBdevForConfigChangeCheck 00:04:08.417 10:32:35 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:08.417 10:32:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:08.417 10:32:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.417 10:32:35 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:08.417 10:32:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.674 10:32:36 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:08.674 INFO: shutting down applications... 
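Read end to end, the trace above amounts to a complete NVMe-oF/TCP target configuration driven over the target's RPC socket. A condensed, hedged sketch of that same call sequence — every command and argument is copied from the trace; only the $rpc shorthand is added here:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # backing malloc bdevs for the two namespaces
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1

    # TCP transport, one subsystem, two namespaces, one listener on 127.0.0.1:4420
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420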
00:04:08.674 10:32:36 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:08.674 10:32:36 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:08.674 10:32:36 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:08.674 10:32:36 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:10.571 Calling clear_iscsi_subsystem 00:04:10.571 Calling clear_nvmf_subsystem 00:04:10.571 Calling clear_nbd_subsystem 00:04:10.571 Calling clear_ublk_subsystem 00:04:10.571 Calling clear_vhost_blk_subsystem 00:04:10.571 Calling clear_vhost_scsi_subsystem 00:04:10.571 Calling clear_bdev_subsystem 00:04:10.571 10:32:37 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:10.571 10:32:37 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:10.571 10:32:37 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:10.571 10:32:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:10.571 10:32:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:10.571 10:32:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:10.830 10:32:38 json_config -- json_config/json_config.sh@352 -- # break 00:04:10.830 10:32:38 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:10.830 10:32:38 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:10.830 10:32:38 json_config -- json_config/common.sh@31 -- # local app=target 00:04:10.830 10:32:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:10.830 10:32:38 json_config -- json_config/common.sh@35 -- # [[ -n 2493196 ]] 00:04:10.830 10:32:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2493196 00:04:10.830 10:32:38 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:10.830 10:32:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.830 10:32:38 json_config -- json_config/common.sh@41 -- # kill -0 2493196 00:04:10.830 10:32:38 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:11.398 10:32:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:11.398 10:32:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:11.398 10:32:38 json_config -- json_config/common.sh@41 -- # kill -0 2493196 00:04:11.398 10:32:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:11.398 10:32:38 json_config -- json_config/common.sh@43 -- # break 00:04:11.398 10:32:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:11.398 10:32:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:11.398 SPDK target shutdown done 00:04:11.398 10:32:38 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:11.398 INFO: relaunching applications... 
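The shutdown just traced is json_config/common.sh's stop-and-poll pattern: send SIGINT to the recorded pid, then re-check it for up to 30 half-second intervals. A simplified, hedged reconstruction of that loop (variable names follow the trace; the real helper also clears app_pid and has an error path if the loop runs out):

    app_pid=2493196                                 # pid recorded when the target was launched
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2>/dev/null; then   # kill -0 only tests whether the pid still exists
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done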
00:04:11.398 10:32:38 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.398 10:32:38 json_config -- json_config/common.sh@9 -- # local app=target 00:04:11.398 10:32:38 json_config -- json_config/common.sh@10 -- # shift 00:04:11.398 10:32:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:11.398 10:32:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:11.398 10:32:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:11.398 10:32:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.398 10:32:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:11.398 10:32:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2494881 00:04:11.398 10:32:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:11.398 Waiting for target to run... 00:04:11.398 10:32:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.398 10:32:38 json_config -- json_config/common.sh@25 -- # waitforlisten 2494881 /var/tmp/spdk_tgt.sock 00:04:11.398 10:32:38 json_config -- common/autotest_common.sh@833 -- # '[' -z 2494881 ']' 00:04:11.398 10:32:38 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:11.398 10:32:38 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:11.398 10:32:38 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:11.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:11.398 10:32:38 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:11.398 10:32:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.398 [2024-11-07 10:32:38.819111] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:11.398 [2024-11-07 10:32:38.819169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494881 ] 00:04:11.657 [2024-11-07 10:32:39.266143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.657 [2024-11-07 10:32:39.323345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.941 [2024-11-07 10:32:42.354016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:14.941 [2024-11-07 10:32:42.386374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:15.507 10:32:43 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:15.507 10:32:43 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:15.507 10:32:43 json_config -- json_config/common.sh@26 -- # echo '' 00:04:15.507 00:04:15.507 10:32:43 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:15.507 10:32:43 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:15.507 INFO: Checking if target configuration is the same... 
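The relaunch starts spdk_tgt again, this time feeding it the JSON configuration that save_config wrote, and then blocks in waitforlisten until the RPC socket answers. A minimal stand-in under the same paths as the trace — the real waitforlisten in autotest_common.sh does more bookkeeping, and the retry-an-RPC wait below is only an assumption about a reasonable readiness check:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$spdk/spdk_tgt_config.json" &
    tgt_pid=$!

    # crude wait-for-listen: keep retrying a harmless RPC until the socket answers
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done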
00:04:15.507 10:32:43 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:15.507 10:32:43 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:15.507 10:32:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:15.507 + '[' 2 -ne 2 ']' 00:04:15.507 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:15.507 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:15.507 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:15.507 +++ basename /dev/fd/62 00:04:15.507 ++ mktemp /tmp/62.XXX 00:04:15.507 + tmp_file_1=/tmp/62.3XZ 00:04:15.507 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:15.507 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:15.507 + tmp_file_2=/tmp/spdk_tgt_config.json.0rw 00:04:15.507 + ret=0 00:04:15.507 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:15.766 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:15.766 + diff -u /tmp/62.3XZ /tmp/spdk_tgt_config.json.0rw 00:04:15.766 + echo 'INFO: JSON config files are the same' 00:04:15.766 INFO: JSON config files are the same 00:04:15.766 + rm /tmp/62.3XZ /tmp/spdk_tgt_config.json.0rw 00:04:15.766 + exit 0 00:04:15.766 10:32:43 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:15.766 10:32:43 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:15.766 INFO: changing configuration and checking if this can be detected... 00:04:15.766 10:32:43 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:15.766 10:32:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:16.024 10:32:43 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.024 10:32:43 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:16.024 10:32:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:16.024 + '[' 2 -ne 2 ']' 00:04:16.024 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:16.024 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:16.024 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:16.024 +++ basename /dev/fd/62 00:04:16.024 ++ mktemp /tmp/62.XXX 00:04:16.024 + tmp_file_1=/tmp/62.JZr 00:04:16.024 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.024 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:16.024 + tmp_file_2=/tmp/spdk_tgt_config.json.e7N 00:04:16.024 + ret=0 00:04:16.024 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:16.282 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:16.591 + diff -u /tmp/62.JZr /tmp/spdk_tgt_config.json.e7N 00:04:16.591 + ret=1 00:04:16.591 + echo '=== Start of file: /tmp/62.JZr ===' 00:04:16.591 + cat /tmp/62.JZr 00:04:16.591 + echo '=== End of file: /tmp/62.JZr ===' 00:04:16.591 + echo '' 00:04:16.591 + echo '=== Start of file: /tmp/spdk_tgt_config.json.e7N ===' 00:04:16.591 + cat /tmp/spdk_tgt_config.json.e7N 00:04:16.591 + echo '=== End of file: /tmp/spdk_tgt_config.json.e7N ===' 00:04:16.591 + echo '' 00:04:16.591 + rm /tmp/62.JZr /tmp/spdk_tgt_config.json.e7N 00:04:16.591 + exit 1 00:04:16.591 10:32:43 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:16.591 INFO: configuration change detected. 00:04:16.591 10:32:43 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:16.591 10:32:43 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:16.591 10:32:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.591 10:32:44 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:16.591 10:32:44 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:16.591 10:32:44 json_config -- json_config/json_config.sh@324 -- # [[ -n 2494881 ]] 00:04:16.591 10:32:44 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:16.591 10:32:44 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.591 10:32:44 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:16.591 10:32:44 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:16.591 10:32:44 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:16.591 10:32:44 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:16.591 10:32:44 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:16.591 10:32:44 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.591 10:32:44 json_config -- json_config/json_config.sh@330 -- # killprocess 2494881 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@952 -- # '[' -z 2494881 ']' 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@956 -- # kill -0 2494881 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@957 -- # uname 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:16.591 10:32:44 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2494881 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2494881' 00:04:16.591 killing process with pid 2494881 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@971 -- # kill 2494881 00:04:16.591 10:32:44 json_config -- common/autotest_common.sh@976 -- # wait 2494881 00:04:18.049 10:32:45 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:18.049 10:32:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:18.049 10:32:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.049 10:32:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.049 10:32:45 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:18.049 10:32:45 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:18.049 INFO: Success 00:04:18.049 00:04:18.049 real 0m15.713s 00:04:18.049 user 0m16.064s 00:04:18.049 sys 0m2.685s 00:04:18.049 10:32:45 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:18.049 10:32:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.049 ************************************ 00:04:18.049 END TEST json_config 00:04:18.049 ************************************ 00:04:18.049 10:32:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:18.049 10:32:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:18.049 10:32:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:18.049 10:32:45 -- common/autotest_common.sh@10 -- # set +x 00:04:18.049 ************************************ 00:04:18.049 START TEST json_config_extra_key 00:04:18.049 ************************************ 00:04:18.049 10:32:45 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:18.308 10:32:45 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:18.308 10:32:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:18.308 10:32:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:18.308 10:32:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:18.308 10:32:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.308 10:32:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.309 10:32:45 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:18.309 10:32:45 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.309 10:32:45 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:18.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.309 --rc genhtml_branch_coverage=1 00:04:18.309 --rc genhtml_function_coverage=1 00:04:18.309 --rc genhtml_legend=1 00:04:18.309 --rc geninfo_all_blocks=1 00:04:18.309 --rc geninfo_unexecuted_blocks=1 00:04:18.309 00:04:18.309 ' 00:04:18.309 10:32:45 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:18.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.309 --rc genhtml_branch_coverage=1 00:04:18.309 --rc genhtml_function_coverage=1 00:04:18.309 --rc genhtml_legend=1 00:04:18.309 --rc geninfo_all_blocks=1 00:04:18.309 --rc geninfo_unexecuted_blocks=1 00:04:18.309 00:04:18.309 ' 00:04:18.309 10:32:45 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:18.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.309 --rc genhtml_branch_coverage=1 00:04:18.309 --rc genhtml_function_coverage=1 00:04:18.309 --rc genhtml_legend=1 00:04:18.309 --rc geninfo_all_blocks=1 00:04:18.309 --rc geninfo_unexecuted_blocks=1 00:04:18.309 00:04:18.309 ' 00:04:18.309 10:32:45 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:18.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.309 --rc genhtml_branch_coverage=1 00:04:18.309 --rc genhtml_function_coverage=1 00:04:18.309 --rc genhtml_legend=1 00:04:18.309 --rc geninfo_all_blocks=1 00:04:18.309 --rc geninfo_unexecuted_blocks=1 00:04:18.309 00:04:18.309 ' 00:04:18.309 10:32:45 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:18.309 10:32:45 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:18.309 10:32:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.309 10:32:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.309 10:32:45 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.309 10:32:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:18.309 10:32:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:18.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:18.309 10:32:45 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:18.309 10:32:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:18.309 10:32:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:18.309 10:32:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:18.309 10:32:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:18.309 10:32:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:18.309 10:32:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:18.309 10:32:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:18.309 10:32:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:18.309 10:32:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:18.309 10:32:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:18.309 10:32:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:18.309 INFO: launching applications... 
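The declare -A lines above are json_config/common.sh's per-app bookkeeping: one associative array each for pid, RPC socket, extra spdk_tgt parameters, and config path, all keyed by the app name ('target' here). A hedged sketch of that pattern and of how the launch line just below is assembled from it ($spdk is only a shorthand for the workspace path seen in the trace):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    declare -A app_pid=(      [target]='' )
    declare -A app_socket=(   [target]='/var/tmp/spdk_tgt.sock' )
    declare -A app_params=(   [target]='-m 0x1 -s 1024' )
    declare -A configs_path=( [target]="$spdk/test/json_config/extra_key.json" )

    app=target
    "$spdk/build/bin/spdk_tgt" ${app_params[$app]} -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!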
00:04:18.309 10:32:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:18.309 10:32:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:18.309 10:32:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:18.309 10:32:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:18.309 10:32:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:18.309 10:32:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:18.309 10:32:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.309 10:32:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.309 10:32:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2496169 00:04:18.309 10:32:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:18.309 Waiting for target to run... 00:04:18.310 10:32:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2496169 /var/tmp/spdk_tgt.sock 00:04:18.310 10:32:45 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 2496169 ']' 00:04:18.310 10:32:45 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:18.310 10:32:45 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:18.310 10:32:45 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:18.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:18.310 10:32:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:18.310 10:32:45 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:18.310 10:32:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:18.310 [2024-11-07 10:32:45.880846] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:18.310 [2024-11-07 10:32:45.880895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496169 ] 00:04:18.568 [2024-11-07 10:32:46.162759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.568 [2024-11-07 10:32:46.196696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.135 10:32:46 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:19.135 10:32:46 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:19.135 10:32:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:19.135 00:04:19.135 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:19.135 INFO: shutting down applications... 
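waitforlisten above is handed the fresh pid plus the RPC socket path, announces itself, and retries up to max_retries=100 before the ERR trap would fire. A simplified, hedged version of such a helper; the real one in autotest_common.sh is more thorough than this socket-file check:

    waitforlisten_sketch() {        # usage: waitforlisten_sketch <pid> [rpc_addr]
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died while starting up
            [[ -S $rpc_addr ]] && return 0           # socket exists, assume it is listening
            sleep 0.1
        done
        return 1
    }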
00:04:19.135 10:32:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:19.135 10:32:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:19.135 10:32:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:19.135 10:32:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2496169 ]] 00:04:19.135 10:32:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2496169 00:04:19.135 10:32:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:19.135 10:32:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.135 10:32:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2496169 00:04:19.135 10:32:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:19.702 10:32:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:19.702 10:32:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:19.702 10:32:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2496169 00:04:19.703 10:32:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:19.703 10:32:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:19.703 10:32:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:19.703 10:32:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:19.703 SPDK target shutdown done 00:04:19.703 10:32:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:19.703 Success 00:04:19.703 00:04:19.703 real 0m1.544s 00:04:19.703 user 0m1.340s 00:04:19.703 sys 0m0.378s 00:04:19.703 10:32:47 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:19.703 10:32:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:19.703 ************************************ 00:04:19.703 END TEST json_config_extra_key 00:04:19.703 ************************************ 00:04:19.703 10:32:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:19.703 10:32:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:19.703 10:32:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:19.703 10:32:47 -- common/autotest_common.sh@10 -- # set +x 00:04:19.703 ************************************ 00:04:19.703 START TEST alias_rpc 00:04:19.703 ************************************ 00:04:19.703 10:32:47 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:19.703 * Looking for test storage... 
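Each of these suites also runs the lt 1.15 2 / cmp_versions trace from scripts/common.sh (seen above under json_config_extra_key and again below) to decide which lcov options to export. A hedged condensation of that dotted-version comparison — the real script validates each field with its decimal helper, which is omitted here:

    lt_sketch() {    # usage: lt_sketch 1.15 2  -> exit 0 iff $1 < $2
        local -a ver1 ver2
        IFS='.-' read -ra ver1 <<< "$1"
        IFS='.-' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater -> not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller -> less-than
        done
        return 1   # all fields equal -> not strictly less-than
    }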
00:04:19.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:19.703 10:32:47 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:19.703 10:32:47 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:19.703 10:32:47 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:19.962 10:32:47 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.962 10:32:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:19.962 10:32:47 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.962 10:32:47 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:19.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.962 --rc genhtml_branch_coverage=1 00:04:19.962 --rc genhtml_function_coverage=1 00:04:19.962 --rc genhtml_legend=1 00:04:19.962 --rc geninfo_all_blocks=1 00:04:19.962 --rc geninfo_unexecuted_blocks=1 00:04:19.962 00:04:19.962 ' 00:04:19.962 10:32:47 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:19.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.962 --rc genhtml_branch_coverage=1 00:04:19.962 --rc genhtml_function_coverage=1 00:04:19.962 --rc genhtml_legend=1 00:04:19.962 --rc geninfo_all_blocks=1 00:04:19.962 --rc geninfo_unexecuted_blocks=1 00:04:19.962 00:04:19.962 ' 00:04:19.962 10:32:47 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:19.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.962 --rc genhtml_branch_coverage=1 00:04:19.962 --rc genhtml_function_coverage=1 00:04:19.962 --rc genhtml_legend=1 00:04:19.962 --rc geninfo_all_blocks=1 00:04:19.962 --rc geninfo_unexecuted_blocks=1 00:04:19.962 00:04:19.962 ' 00:04:19.962 10:32:47 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:19.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.962 --rc genhtml_branch_coverage=1 00:04:19.962 --rc genhtml_function_coverage=1 00:04:19.962 --rc genhtml_legend=1 00:04:19.962 --rc geninfo_all_blocks=1 00:04:19.962 --rc geninfo_unexecuted_blocks=1 00:04:19.962 00:04:19.962 ' 00:04:19.962 10:32:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:19.962 10:32:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.962 10:32:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2496464 00:04:19.962 10:32:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2496464 00:04:19.962 10:32:47 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 2496464 ']' 00:04:19.962 10:32:47 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.962 10:32:47 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:19.962 10:32:47 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.962 10:32:47 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:19.962 10:32:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.962 [2024-11-07 10:32:47.486025] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:19.962 [2024-11-07 10:32:47.486070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496464 ] 00:04:19.962 [2024-11-07 10:32:47.548335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.962 [2024-11-07 10:32:47.588151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.220 10:32:47 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:20.221 10:32:47 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:20.221 10:32:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:20.479 10:32:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2496464 00:04:20.479 10:32:48 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 2496464 ']' 00:04:20.479 10:32:48 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 2496464 00:04:20.479 10:32:48 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:20.479 10:32:48 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:20.479 10:32:48 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2496464 00:04:20.479 10:32:48 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:20.479 10:32:48 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:20.479 10:32:48 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2496464' 00:04:20.479 killing process with pid 2496464 00:04:20.479 10:32:48 alias_rpc -- common/autotest_common.sh@971 -- # kill 2496464 00:04:20.479 10:32:48 alias_rpc -- common/autotest_common.sh@976 -- # wait 2496464 00:04:20.737 00:04:20.737 real 0m1.082s 00:04:20.737 user 0m1.112s 00:04:20.737 sys 0m0.396s 00:04:20.737 10:32:48 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:20.737 10:32:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.737 ************************************ 00:04:20.737 END TEST alias_rpc 00:04:20.737 ************************************ 00:04:20.737 10:32:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:20.737 10:32:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:20.737 10:32:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:20.737 10:32:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.737 10:32:48 -- common/autotest_common.sh@10 -- # set +x 00:04:20.996 ************************************ 00:04:20.996 START TEST spdkcli_tcp 00:04:20.996 ************************************ 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:20.996 * Looking for test storage... 
00:04:20.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.996 10:32:48 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:20.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.996 --rc genhtml_branch_coverage=1 00:04:20.996 --rc genhtml_function_coverage=1 00:04:20.996 --rc genhtml_legend=1 00:04:20.996 --rc geninfo_all_blocks=1 00:04:20.996 --rc geninfo_unexecuted_blocks=1 00:04:20.996 00:04:20.996 ' 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:20.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.996 --rc genhtml_branch_coverage=1 00:04:20.996 --rc genhtml_function_coverage=1 00:04:20.996 --rc genhtml_legend=1 00:04:20.996 --rc geninfo_all_blocks=1 00:04:20.996 --rc 
geninfo_unexecuted_blocks=1 00:04:20.996 00:04:20.996 ' 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:20.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.996 --rc genhtml_branch_coverage=1 00:04:20.996 --rc genhtml_function_coverage=1 00:04:20.996 --rc genhtml_legend=1 00:04:20.996 --rc geninfo_all_blocks=1 00:04:20.996 --rc geninfo_unexecuted_blocks=1 00:04:20.996 00:04:20.996 ' 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:20.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.996 --rc genhtml_branch_coverage=1 00:04:20.996 --rc genhtml_function_coverage=1 00:04:20.996 --rc genhtml_legend=1 00:04:20.996 --rc geninfo_all_blocks=1 00:04:20.996 --rc geninfo_unexecuted_blocks=1 00:04:20.996 00:04:20.996 ' 00:04:20.996 10:32:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:20.996 10:32:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:20.996 10:32:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:20.996 10:32:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:20.996 10:32:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:20.996 10:32:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:20.996 10:32:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:20.996 10:32:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2496751 00:04:20.996 10:32:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2496751 00:04:20.996 10:32:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 2496751 ']' 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:20.996 10:32:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:20.996 [2024-11-07 10:32:48.641376] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:20.997 [2024-11-07 10:32:48.641427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2496751 ] 00:04:21.255 [2024-11-07 10:32:48.704097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.255 [2024-11-07 10:32:48.748185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.255 [2024-11-07 10:32:48.748189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.515 10:32:48 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:21.515 10:32:48 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:21.515 10:32:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2496759 00:04:21.515 10:32:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:21.515 10:32:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:21.515 [ 00:04:21.515 "bdev_malloc_delete", 00:04:21.515 "bdev_malloc_create", 00:04:21.515 "bdev_null_resize", 00:04:21.515 "bdev_null_delete", 00:04:21.515 "bdev_null_create", 00:04:21.515 "bdev_nvme_cuse_unregister", 00:04:21.515 "bdev_nvme_cuse_register", 00:04:21.515 "bdev_opal_new_user", 00:04:21.515 "bdev_opal_set_lock_state", 00:04:21.515 "bdev_opal_delete", 00:04:21.515 "bdev_opal_get_info", 00:04:21.515 "bdev_opal_create", 00:04:21.515 "bdev_nvme_opal_revert", 00:04:21.515 "bdev_nvme_opal_init", 00:04:21.515 "bdev_nvme_send_cmd", 00:04:21.515 "bdev_nvme_set_keys", 00:04:21.515 "bdev_nvme_get_path_iostat", 00:04:21.515 "bdev_nvme_get_mdns_discovery_info", 00:04:21.515 "bdev_nvme_stop_mdns_discovery", 00:04:21.515 "bdev_nvme_start_mdns_discovery", 00:04:21.515 "bdev_nvme_set_multipath_policy", 00:04:21.515 "bdev_nvme_set_preferred_path", 00:04:21.515 "bdev_nvme_get_io_paths", 00:04:21.515 "bdev_nvme_remove_error_injection", 00:04:21.515 "bdev_nvme_add_error_injection", 00:04:21.515 "bdev_nvme_get_discovery_info", 00:04:21.515 "bdev_nvme_stop_discovery", 00:04:21.515 "bdev_nvme_start_discovery", 00:04:21.515 "bdev_nvme_get_controller_health_info", 00:04:21.515 "bdev_nvme_disable_controller", 00:04:21.515 "bdev_nvme_enable_controller", 00:04:21.515 "bdev_nvme_reset_controller", 00:04:21.515 "bdev_nvme_get_transport_statistics", 00:04:21.515 "bdev_nvme_apply_firmware", 00:04:21.515 "bdev_nvme_detach_controller", 00:04:21.515 "bdev_nvme_get_controllers", 00:04:21.515 "bdev_nvme_attach_controller", 00:04:21.515 "bdev_nvme_set_hotplug", 00:04:21.515 "bdev_nvme_set_options", 00:04:21.515 "bdev_passthru_delete", 00:04:21.515 "bdev_passthru_create", 00:04:21.515 "bdev_lvol_set_parent_bdev", 00:04:21.515 "bdev_lvol_set_parent", 00:04:21.515 "bdev_lvol_check_shallow_copy", 00:04:21.515 "bdev_lvol_start_shallow_copy", 00:04:21.515 "bdev_lvol_grow_lvstore", 00:04:21.515 "bdev_lvol_get_lvols", 00:04:21.515 "bdev_lvol_get_lvstores", 00:04:21.515 "bdev_lvol_delete", 00:04:21.515 "bdev_lvol_set_read_only", 00:04:21.515 "bdev_lvol_resize", 00:04:21.515 "bdev_lvol_decouple_parent", 00:04:21.515 "bdev_lvol_inflate", 00:04:21.515 "bdev_lvol_rename", 00:04:21.515 "bdev_lvol_clone_bdev", 00:04:21.515 "bdev_lvol_clone", 00:04:21.515 "bdev_lvol_snapshot", 00:04:21.515 "bdev_lvol_create", 00:04:21.515 "bdev_lvol_delete_lvstore", 00:04:21.515 "bdev_lvol_rename_lvstore", 
00:04:21.515 "bdev_lvol_create_lvstore", 00:04:21.515 "bdev_raid_set_options", 00:04:21.515 "bdev_raid_remove_base_bdev", 00:04:21.515 "bdev_raid_add_base_bdev", 00:04:21.515 "bdev_raid_delete", 00:04:21.515 "bdev_raid_create", 00:04:21.515 "bdev_raid_get_bdevs", 00:04:21.515 "bdev_error_inject_error", 00:04:21.515 "bdev_error_delete", 00:04:21.515 "bdev_error_create", 00:04:21.515 "bdev_split_delete", 00:04:21.515 "bdev_split_create", 00:04:21.515 "bdev_delay_delete", 00:04:21.515 "bdev_delay_create", 00:04:21.515 "bdev_delay_update_latency", 00:04:21.515 "bdev_zone_block_delete", 00:04:21.515 "bdev_zone_block_create", 00:04:21.515 "blobfs_create", 00:04:21.515 "blobfs_detect", 00:04:21.515 "blobfs_set_cache_size", 00:04:21.515 "bdev_aio_delete", 00:04:21.515 "bdev_aio_rescan", 00:04:21.515 "bdev_aio_create", 00:04:21.515 "bdev_ftl_set_property", 00:04:21.515 "bdev_ftl_get_properties", 00:04:21.515 "bdev_ftl_get_stats", 00:04:21.515 "bdev_ftl_unmap", 00:04:21.515 "bdev_ftl_unload", 00:04:21.515 "bdev_ftl_delete", 00:04:21.515 "bdev_ftl_load", 00:04:21.515 "bdev_ftl_create", 00:04:21.515 "bdev_virtio_attach_controller", 00:04:21.516 "bdev_virtio_scsi_get_devices", 00:04:21.516 "bdev_virtio_detach_controller", 00:04:21.516 "bdev_virtio_blk_set_hotplug", 00:04:21.516 "bdev_iscsi_delete", 00:04:21.516 "bdev_iscsi_create", 00:04:21.516 "bdev_iscsi_set_options", 00:04:21.516 "accel_error_inject_error", 00:04:21.516 "ioat_scan_accel_module", 00:04:21.516 "dsa_scan_accel_module", 00:04:21.516 "iaa_scan_accel_module", 00:04:21.516 "vfu_virtio_create_fs_endpoint", 00:04:21.516 "vfu_virtio_create_scsi_endpoint", 00:04:21.516 "vfu_virtio_scsi_remove_target", 00:04:21.516 "vfu_virtio_scsi_add_target", 00:04:21.516 "vfu_virtio_create_blk_endpoint", 00:04:21.516 "vfu_virtio_delete_endpoint", 00:04:21.516 "keyring_file_remove_key", 00:04:21.516 "keyring_file_add_key", 00:04:21.516 "keyring_linux_set_options", 00:04:21.516 "fsdev_aio_delete", 00:04:21.516 "fsdev_aio_create", 00:04:21.516 "iscsi_get_histogram", 00:04:21.516 "iscsi_enable_histogram", 00:04:21.516 "iscsi_set_options", 00:04:21.516 "iscsi_get_auth_groups", 00:04:21.516 "iscsi_auth_group_remove_secret", 00:04:21.516 "iscsi_auth_group_add_secret", 00:04:21.516 "iscsi_delete_auth_group", 00:04:21.516 "iscsi_create_auth_group", 00:04:21.516 "iscsi_set_discovery_auth", 00:04:21.516 "iscsi_get_options", 00:04:21.516 "iscsi_target_node_request_logout", 00:04:21.516 "iscsi_target_node_set_redirect", 00:04:21.516 "iscsi_target_node_set_auth", 00:04:21.516 "iscsi_target_node_add_lun", 00:04:21.516 "iscsi_get_stats", 00:04:21.516 "iscsi_get_connections", 00:04:21.516 "iscsi_portal_group_set_auth", 00:04:21.516 "iscsi_start_portal_group", 00:04:21.516 "iscsi_delete_portal_group", 00:04:21.516 "iscsi_create_portal_group", 00:04:21.516 "iscsi_get_portal_groups", 00:04:21.516 "iscsi_delete_target_node", 00:04:21.516 "iscsi_target_node_remove_pg_ig_maps", 00:04:21.516 "iscsi_target_node_add_pg_ig_maps", 00:04:21.516 "iscsi_create_target_node", 00:04:21.516 "iscsi_get_target_nodes", 00:04:21.516 "iscsi_delete_initiator_group", 00:04:21.516 "iscsi_initiator_group_remove_initiators", 00:04:21.516 "iscsi_initiator_group_add_initiators", 00:04:21.516 "iscsi_create_initiator_group", 00:04:21.516 "iscsi_get_initiator_groups", 00:04:21.516 "nvmf_set_crdt", 00:04:21.516 "nvmf_set_config", 00:04:21.516 "nvmf_set_max_subsystems", 00:04:21.516 "nvmf_stop_mdns_prr", 00:04:21.516 "nvmf_publish_mdns_prr", 00:04:21.516 "nvmf_subsystem_get_listeners", 00:04:21.516 
"nvmf_subsystem_get_qpairs", 00:04:21.516 "nvmf_subsystem_get_controllers", 00:04:21.516 "nvmf_get_stats", 00:04:21.516 "nvmf_get_transports", 00:04:21.516 "nvmf_create_transport", 00:04:21.516 "nvmf_get_targets", 00:04:21.516 "nvmf_delete_target", 00:04:21.516 "nvmf_create_target", 00:04:21.516 "nvmf_subsystem_allow_any_host", 00:04:21.516 "nvmf_subsystem_set_keys", 00:04:21.516 "nvmf_subsystem_remove_host", 00:04:21.516 "nvmf_subsystem_add_host", 00:04:21.516 "nvmf_ns_remove_host", 00:04:21.516 "nvmf_ns_add_host", 00:04:21.516 "nvmf_subsystem_remove_ns", 00:04:21.516 "nvmf_subsystem_set_ns_ana_group", 00:04:21.516 "nvmf_subsystem_add_ns", 00:04:21.516 "nvmf_subsystem_listener_set_ana_state", 00:04:21.516 "nvmf_discovery_get_referrals", 00:04:21.516 "nvmf_discovery_remove_referral", 00:04:21.516 "nvmf_discovery_add_referral", 00:04:21.516 "nvmf_subsystem_remove_listener", 00:04:21.516 "nvmf_subsystem_add_listener", 00:04:21.516 "nvmf_delete_subsystem", 00:04:21.516 "nvmf_create_subsystem", 00:04:21.516 "nvmf_get_subsystems", 00:04:21.516 "env_dpdk_get_mem_stats", 00:04:21.516 "nbd_get_disks", 00:04:21.516 "nbd_stop_disk", 00:04:21.516 "nbd_start_disk", 00:04:21.516 "ublk_recover_disk", 00:04:21.516 "ublk_get_disks", 00:04:21.516 "ublk_stop_disk", 00:04:21.516 "ublk_start_disk", 00:04:21.516 "ublk_destroy_target", 00:04:21.516 "ublk_create_target", 00:04:21.516 "virtio_blk_create_transport", 00:04:21.516 "virtio_blk_get_transports", 00:04:21.516 "vhost_controller_set_coalescing", 00:04:21.516 "vhost_get_controllers", 00:04:21.516 "vhost_delete_controller", 00:04:21.516 "vhost_create_blk_controller", 00:04:21.516 "vhost_scsi_controller_remove_target", 00:04:21.516 "vhost_scsi_controller_add_target", 00:04:21.516 "vhost_start_scsi_controller", 00:04:21.516 "vhost_create_scsi_controller", 00:04:21.516 "thread_set_cpumask", 00:04:21.516 "scheduler_set_options", 00:04:21.516 "framework_get_governor", 00:04:21.516 "framework_get_scheduler", 00:04:21.516 "framework_set_scheduler", 00:04:21.516 "framework_get_reactors", 00:04:21.516 "thread_get_io_channels", 00:04:21.516 "thread_get_pollers", 00:04:21.516 "thread_get_stats", 00:04:21.516 "framework_monitor_context_switch", 00:04:21.516 "spdk_kill_instance", 00:04:21.516 "log_enable_timestamps", 00:04:21.516 "log_get_flags", 00:04:21.516 "log_clear_flag", 00:04:21.516 "log_set_flag", 00:04:21.516 "log_get_level", 00:04:21.516 "log_set_level", 00:04:21.516 "log_get_print_level", 00:04:21.516 "log_set_print_level", 00:04:21.516 "framework_enable_cpumask_locks", 00:04:21.516 "framework_disable_cpumask_locks", 00:04:21.516 "framework_wait_init", 00:04:21.516 "framework_start_init", 00:04:21.516 "scsi_get_devices", 00:04:21.516 "bdev_get_histogram", 00:04:21.516 "bdev_enable_histogram", 00:04:21.516 "bdev_set_qos_limit", 00:04:21.516 "bdev_set_qd_sampling_period", 00:04:21.516 "bdev_get_bdevs", 00:04:21.516 "bdev_reset_iostat", 00:04:21.516 "bdev_get_iostat", 00:04:21.516 "bdev_examine", 00:04:21.516 "bdev_wait_for_examine", 00:04:21.516 "bdev_set_options", 00:04:21.516 "accel_get_stats", 00:04:21.516 "accel_set_options", 00:04:21.516 "accel_set_driver", 00:04:21.516 "accel_crypto_key_destroy", 00:04:21.516 "accel_crypto_keys_get", 00:04:21.516 "accel_crypto_key_create", 00:04:21.516 "accel_assign_opc", 00:04:21.516 "accel_get_module_info", 00:04:21.516 "accel_get_opc_assignments", 00:04:21.516 "vmd_rescan", 00:04:21.516 "vmd_remove_device", 00:04:21.516 "vmd_enable", 00:04:21.516 "sock_get_default_impl", 00:04:21.516 "sock_set_default_impl", 
00:04:21.516 "sock_impl_set_options", 00:04:21.516 "sock_impl_get_options", 00:04:21.516 "iobuf_get_stats", 00:04:21.516 "iobuf_set_options", 00:04:21.516 "keyring_get_keys", 00:04:21.516 "vfu_tgt_set_base_path", 00:04:21.516 "framework_get_pci_devices", 00:04:21.516 "framework_get_config", 00:04:21.516 "framework_get_subsystems", 00:04:21.516 "fsdev_set_opts", 00:04:21.516 "fsdev_get_opts", 00:04:21.516 "trace_get_info", 00:04:21.516 "trace_get_tpoint_group_mask", 00:04:21.516 "trace_disable_tpoint_group", 00:04:21.516 "trace_enable_tpoint_group", 00:04:21.516 "trace_clear_tpoint_mask", 00:04:21.516 "trace_set_tpoint_mask", 00:04:21.516 "notify_get_notifications", 00:04:21.516 "notify_get_types", 00:04:21.516 "spdk_get_version", 00:04:21.516 "rpc_get_methods" 00:04:21.516 ] 00:04:21.516 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:21.516 10:32:49 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:21.516 10:32:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:21.776 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:21.776 10:32:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2496751 00:04:21.776 10:32:49 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 2496751 ']' 00:04:21.776 10:32:49 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 2496751 00:04:21.776 10:32:49 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:21.776 10:32:49 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:21.776 10:32:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2496751 00:04:21.776 10:32:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:21.776 10:32:49 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:21.776 10:32:49 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2496751' 00:04:21.776 killing process with pid 2496751 00:04:21.776 10:32:49 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 2496751 00:04:21.776 10:32:49 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 2496751 00:04:22.039 00:04:22.039 real 0m1.115s 00:04:22.039 user 0m1.897s 00:04:22.039 sys 0m0.436s 00:04:22.039 10:32:49 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.039 10:32:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:22.039 ************************************ 00:04:22.039 END TEST spdkcli_tcp 00:04:22.039 ************************************ 00:04:22.039 10:32:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:22.039 10:32:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:22.039 10:32:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.039 10:32:49 -- common/autotest_common.sh@10 -- # set +x 00:04:22.039 ************************************ 00:04:22.039 START TEST dpdk_mem_utility 00:04:22.039 ************************************ 00:04:22.039 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:22.039 * Looking for test storage... 
00:04:22.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:22.039 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:22.039 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:22.039 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:22.298 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.298 10:32:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:22.298 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.298 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:22.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.298 --rc genhtml_branch_coverage=1 00:04:22.298 --rc genhtml_function_coverage=1 00:04:22.298 --rc genhtml_legend=1 00:04:22.298 --rc geninfo_all_blocks=1 00:04:22.298 --rc geninfo_unexecuted_blocks=1 00:04:22.298 00:04:22.298 ' 00:04:22.298 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:22.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.298 --rc 
genhtml_branch_coverage=1 00:04:22.298 --rc genhtml_function_coverage=1 00:04:22.298 --rc genhtml_legend=1 00:04:22.298 --rc geninfo_all_blocks=1 00:04:22.298 --rc geninfo_unexecuted_blocks=1 00:04:22.298 00:04:22.298 ' 00:04:22.298 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:22.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.298 --rc genhtml_branch_coverage=1 00:04:22.298 --rc genhtml_function_coverage=1 00:04:22.298 --rc genhtml_legend=1 00:04:22.298 --rc geninfo_all_blocks=1 00:04:22.298 --rc geninfo_unexecuted_blocks=1 00:04:22.298 00:04:22.298 ' 00:04:22.298 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:22.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.298 --rc genhtml_branch_coverage=1 00:04:22.298 --rc genhtml_function_coverage=1 00:04:22.298 --rc genhtml_legend=1 00:04:22.298 --rc geninfo_all_blocks=1 00:04:22.298 --rc geninfo_unexecuted_blocks=1 00:04:22.298 00:04:22.298 ' 00:04:22.298 10:32:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:22.298 10:32:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2497056 00:04:22.298 10:32:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2497056 00:04:22.298 10:32:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.298 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 2497056 ']' 00:04:22.298 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.298 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:22.298 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.298 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:22.298 10:32:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:22.298 [2024-11-07 10:32:49.839131] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:22.298 [2024-11-07 10:32:49.839177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497056 ] 00:04:22.298 [2024-11-07 10:32:49.900273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.298 [2024-11-07 10:32:49.940317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.556 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:22.556 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:22.556 10:32:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:22.556 10:32:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:22.556 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:22.556 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:22.556 { 00:04:22.556 "filename": "/tmp/spdk_mem_dump.txt" 00:04:22.556 } 00:04:22.556 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.556 10:32:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:22.556 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:22.556 1 heaps totaling size 810.000000 MiB 00:04:22.556 size: 810.000000 MiB heap id: 0 00:04:22.556 end heaps---------- 00:04:22.556 9 mempools totaling size 595.772034 MiB 00:04:22.556 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:22.556 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:22.556 size: 92.545471 MiB name: bdev_io_2497056 00:04:22.556 size: 50.003479 MiB name: msgpool_2497056 00:04:22.556 size: 36.509338 MiB name: fsdev_io_2497056 00:04:22.556 size: 21.763794 MiB name: PDU_Pool 00:04:22.556 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:22.556 size: 4.133484 MiB name: evtpool_2497056 00:04:22.556 size: 0.026123 MiB name: Session_Pool 00:04:22.556 end mempools------- 00:04:22.556 6 memzones totaling size 4.142822 MiB 00:04:22.556 size: 1.000366 MiB name: RG_ring_0_2497056 00:04:22.556 size: 1.000366 MiB name: RG_ring_1_2497056 00:04:22.556 size: 1.000366 MiB name: RG_ring_4_2497056 00:04:22.556 size: 1.000366 MiB name: RG_ring_5_2497056 00:04:22.556 size: 0.125366 MiB name: RG_ring_2_2497056 00:04:22.556 size: 0.015991 MiB name: RG_ring_3_2497056 00:04:22.556 end memzones------- 00:04:22.556 10:32:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:22.815 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:22.815 list of free elements. 
size: 10.862488 MiB 00:04:22.815 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:22.815 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:22.815 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:22.815 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:22.815 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:22.815 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:22.815 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:22.815 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:22.815 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:22.815 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:22.815 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:22.815 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:22.815 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:22.815 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:22.815 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:22.815 list of standard malloc elements. size: 199.218628 MiB 00:04:22.815 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:22.815 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:22.815 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:22.815 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:22.815 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:22.815 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:22.815 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:22.815 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:22.815 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:22.815 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:22.815 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:22.815 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:22.815 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:22.815 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:22.815 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:22.815 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:22.815 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:22.815 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:22.815 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:22.815 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:22.815 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:22.815 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:22.815 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:22.815 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:22.815 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:22.815 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:22.815 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:22.815 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:22.815 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:22.815 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:22.815 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:22.815 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:22.815 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:22.815 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:22.815 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:22.815 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:22.815 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:22.815 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:22.815 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:22.815 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:22.815 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:22.815 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:22.815 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:22.815 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:22.815 list of memzone associated elements. size: 599.918884 MiB 00:04:22.815 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:22.815 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:22.815 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:22.815 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:22.815 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:22.815 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2497056_0 00:04:22.815 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:22.815 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2497056_0 00:04:22.815 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:22.815 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2497056_0 00:04:22.815 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:22.815 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:22.815 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:22.815 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:22.815 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:22.815 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2497056_0 00:04:22.815 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:22.815 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2497056 00:04:22.815 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:22.815 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2497056 00:04:22.815 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:22.815 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:22.815 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:22.815 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:22.815 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:22.815 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:22.815 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:22.815 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:22.815 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:22.815 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2497056 00:04:22.815 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:22.815 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2497056 00:04:22.815 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:22.815 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2497056 00:04:22.815 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:22.815 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2497056 00:04:22.815 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:22.815 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2497056 00:04:22.815 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:22.815 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2497056 00:04:22.815 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:22.815 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:22.815 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:22.815 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:22.815 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:22.815 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:22.815 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:22.815 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2497056 00:04:22.815 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:22.815 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2497056 00:04:22.815 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:22.815 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:22.815 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:22.815 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:22.815 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:22.815 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2497056 00:04:22.815 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:22.815 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:22.815 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:22.815 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2497056 00:04:22.815 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:22.815 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2497056 00:04:22.815 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:22.816 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2497056 00:04:22.816 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:22.816 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:22.816 10:32:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:22.816 10:32:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2497056 00:04:22.816 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 2497056 ']' 00:04:22.816 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 2497056 00:04:22.816 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:22.816 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:22.816 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2497056 00:04:22.816 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:22.816 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:22.816 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2497056' 00:04:22.816 killing process with pid 2497056 00:04:22.816 10:32:50 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 2497056 00:04:22.816 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 2497056 00:04:23.074 00:04:23.074 real 0m1.009s 00:04:23.074 user 0m0.955s 00:04:23.074 sys 0m0.391s 00:04:23.074 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:23.074 10:32:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:23.074 ************************************ 00:04:23.074 END TEST dpdk_mem_utility 00:04:23.074 ************************************ 00:04:23.074 10:32:50 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:23.074 10:32:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:23.074 10:32:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.074 10:32:50 -- common/autotest_common.sh@10 -- # set +x 00:04:23.075 ************************************ 00:04:23.075 START TEST event 00:04:23.075 ************************************ 00:04:23.075 10:32:50 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:23.343 * Looking for test storage... 00:04:23.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:23.344 10:32:50 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:23.344 10:32:50 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:23.344 10:32:50 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:23.344 10:32:50 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:23.344 10:32:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.344 10:32:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.344 10:32:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.344 10:32:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.344 10:32:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.344 10:32:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.344 10:32:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.344 10:32:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.344 10:32:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.344 10:32:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.344 10:32:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.344 10:32:50 event -- scripts/common.sh@344 -- # case "$op" in 00:04:23.344 10:32:50 event -- scripts/common.sh@345 -- # : 1 00:04:23.344 10:32:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.344 10:32:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.344 10:32:50 event -- scripts/common.sh@365 -- # decimal 1 00:04:23.344 10:32:50 event -- scripts/common.sh@353 -- # local d=1 00:04:23.344 10:32:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.344 10:32:50 event -- scripts/common.sh@355 -- # echo 1 00:04:23.344 10:32:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.344 10:32:50 event -- scripts/common.sh@366 -- # decimal 2 00:04:23.344 10:32:50 event -- scripts/common.sh@353 -- # local d=2 00:04:23.344 10:32:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.344 10:32:50 event -- scripts/common.sh@355 -- # echo 2 00:04:23.344 10:32:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.344 10:32:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.344 10:32:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.344 10:32:50 event -- scripts/common.sh@368 -- # return 0 00:04:23.344 10:32:50 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.344 10:32:50 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:23.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.344 --rc genhtml_branch_coverage=1 00:04:23.344 --rc genhtml_function_coverage=1 00:04:23.344 --rc genhtml_legend=1 00:04:23.344 --rc geninfo_all_blocks=1 00:04:23.344 --rc geninfo_unexecuted_blocks=1 00:04:23.344 00:04:23.344 ' 00:04:23.344 10:32:50 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:23.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.344 --rc genhtml_branch_coverage=1 00:04:23.344 --rc genhtml_function_coverage=1 00:04:23.344 --rc genhtml_legend=1 00:04:23.344 --rc geninfo_all_blocks=1 00:04:23.344 --rc geninfo_unexecuted_blocks=1 00:04:23.344 00:04:23.344 ' 00:04:23.344 10:32:50 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:23.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.345 --rc genhtml_branch_coverage=1 00:04:23.345 --rc genhtml_function_coverage=1 00:04:23.345 --rc genhtml_legend=1 00:04:23.345 --rc geninfo_all_blocks=1 00:04:23.345 --rc geninfo_unexecuted_blocks=1 00:04:23.345 00:04:23.345 ' 00:04:23.345 10:32:50 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:23.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.345 --rc genhtml_branch_coverage=1 00:04:23.345 --rc genhtml_function_coverage=1 00:04:23.345 --rc genhtml_legend=1 00:04:23.345 --rc geninfo_all_blocks=1 00:04:23.345 --rc geninfo_unexecuted_blocks=1 00:04:23.345 00:04:23.345 ' 00:04:23.345 10:32:50 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:23.345 10:32:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:23.345 10:32:50 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:23.345 10:32:50 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:23.345 10:32:50 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:23.345 10:32:50 event -- common/autotest_common.sh@10 -- # set +x 00:04:23.345 ************************************ 00:04:23.345 START TEST event_perf 00:04:23.345 ************************************ 00:04:23.345 10:32:50 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:23.345 Running I/O for 1 seconds...[2024-11-07 10:32:50.913857] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:23.345 [2024-11-07 10:32:50.913927] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497346 ] 00:04:23.345 [2024-11-07 10:32:50.980946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:23.606 [2024-11-07 10:32:51.026614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.606 [2024-11-07 10:32:51.026709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:23.606 [2024-11-07 10:32:51.026796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:23.606 [2024-11-07 10:32:51.026798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.542 Running I/O for 1 seconds... 00:04:24.542 lcore 0: 206794 00:04:24.542 lcore 1: 206795 00:04:24.542 lcore 2: 206795 00:04:24.542 lcore 3: 206796 00:04:24.542 done. 00:04:24.542 00:04:24.542 real 0m1.175s 00:04:24.542 user 0m4.098s 00:04:24.542 sys 0m0.075s 00:04:24.542 10:32:52 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.542 10:32:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:24.542 ************************************ 00:04:24.542 END TEST event_perf 00:04:24.542 ************************************ 00:04:24.542 10:32:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:24.542 10:32:52 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:24.543 10:32:52 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.543 10:32:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.543 ************************************ 00:04:24.543 START TEST event_reactor 00:04:24.543 ************************************ 00:04:24.543 10:32:52 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:24.543 [2024-11-07 10:32:52.152168] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:24.543 [2024-11-07 10:32:52.152241] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497514 ] 00:04:24.801 [2024-11-07 10:32:52.217836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.801 [2024-11-07 10:32:52.258643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.738 test_start 00:04:25.738 oneshot 00:04:25.738 tick 100 00:04:25.738 tick 100 00:04:25.738 tick 250 00:04:25.738 tick 100 00:04:25.738 tick 100 00:04:25.738 tick 100 00:04:25.738 tick 250 00:04:25.738 tick 500 00:04:25.738 tick 100 00:04:25.738 tick 100 00:04:25.738 tick 250 00:04:25.738 tick 100 00:04:25.738 tick 100 00:04:25.738 test_end 00:04:25.738 00:04:25.738 real 0m1.163s 00:04:25.738 user 0m1.102s 00:04:25.738 sys 0m0.058s 00:04:25.738 10:32:53 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:25.738 10:32:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:25.738 ************************************ 00:04:25.738 END TEST event_reactor 00:04:25.738 ************************************ 00:04:25.738 10:32:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:25.738 10:32:53 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:25.738 10:32:53 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:25.738 10:32:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:25.738 ************************************ 00:04:25.738 START TEST event_reactor_perf 00:04:25.738 ************************************ 00:04:25.738 10:32:53 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:25.738 [2024-11-07 10:32:53.380134] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:25.738 [2024-11-07 10:32:53.380203] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497672 ] 00:04:25.998 [2024-11-07 10:32:53.446242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.998 [2024-11-07 10:32:53.486737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.935 test_start 00:04:26.935 test_end 00:04:26.935 Performance: 505220 events per second 00:04:26.935 00:04:26.935 real 0m1.165s 00:04:26.935 user 0m1.101s 00:04:26.935 sys 0m0.060s 00:04:26.935 10:32:54 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.935 10:32:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:26.935 ************************************ 00:04:26.935 END TEST event_reactor_perf 00:04:26.935 ************************************ 00:04:26.935 10:32:54 event -- event/event.sh@49 -- # uname -s 00:04:26.935 10:32:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:26.935 10:32:54 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:26.935 10:32:54 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.935 10:32:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.935 10:32:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.935 ************************************ 00:04:26.935 START TEST event_scheduler 00:04:26.935 ************************************ 00:04:26.935 10:32:54 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:27.194 * Looking for test storage... 
00:04:27.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:27.194 10:32:54 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:27.194 10:32:54 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:27.194 10:32:54 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:27.194 10:32:54 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.194 10:32:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:27.194 10:32:54 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.194 10:32:54 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:27.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.194 --rc genhtml_branch_coverage=1 00:04:27.194 --rc genhtml_function_coverage=1 00:04:27.194 --rc genhtml_legend=1 00:04:27.194 --rc geninfo_all_blocks=1 00:04:27.194 --rc geninfo_unexecuted_blocks=1 00:04:27.194 00:04:27.194 ' 00:04:27.194 10:32:54 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:27.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.194 --rc genhtml_branch_coverage=1 00:04:27.194 --rc genhtml_function_coverage=1 00:04:27.194 --rc genhtml_legend=1 00:04:27.194 --rc geninfo_all_blocks=1 00:04:27.194 --rc geninfo_unexecuted_blocks=1 00:04:27.194 00:04:27.194 ' 00:04:27.194 10:32:54 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:27.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.194 --rc genhtml_branch_coverage=1 00:04:27.194 --rc genhtml_function_coverage=1 00:04:27.194 --rc genhtml_legend=1 00:04:27.194 --rc geninfo_all_blocks=1 00:04:27.194 --rc geninfo_unexecuted_blocks=1 00:04:27.194 00:04:27.194 ' 00:04:27.194 10:32:54 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:27.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.194 --rc genhtml_branch_coverage=1 00:04:27.194 --rc genhtml_function_coverage=1 00:04:27.194 --rc genhtml_legend=1 00:04:27.194 --rc geninfo_all_blocks=1 00:04:27.194 --rc geninfo_unexecuted_blocks=1 00:04:27.194 00:04:27.194 ' 00:04:27.194 10:32:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:27.194 10:32:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2497998 00:04:27.194 10:32:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.194 10:32:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:27.194 10:32:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2497998 00:04:27.194 10:32:54 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 2497998 ']' 00:04:27.194 10:32:54 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.194 10:32:54 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:27.195 10:32:54 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.195 10:32:54 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:27.195 10:32:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:27.195 [2024-11-07 10:32:54.811526] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:27.195 [2024-11-07 10:32:54.811577] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497998 ] 00:04:27.454 [2024-11-07 10:32:54.873741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:27.454 [2024-11-07 10:32:54.917981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.454 [2024-11-07 10:32:54.918068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.454 [2024-11-07 10:32:54.918151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:27.454 [2024-11-07 10:32:54.918154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:27.454 10:32:54 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:27.454 10:32:54 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:27.454 10:32:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:27.454 10:32:54 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.454 10:32:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:27.454 [2024-11-07 10:32:54.986748] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:27.454 [2024-11-07 10:32:54.986767] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:27.454 [2024-11-07 10:32:54.986776] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:27.454 [2024-11-07 10:32:54.986782] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:27.454 [2024-11-07 10:32:54.986787] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:27.454 10:32:54 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.454 10:32:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:27.454 10:32:54 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.454 10:32:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:27.454 [2024-11-07 10:32:55.061133] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
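Note for reproducing the scheduler setup logged above: the harness selected the dynamic scheduler and then completed subsystem initialization over RPC. A minimal sketch of the equivalent manual calls, assuming a target launched with --wait-for-rpc and listening on the default /var/tmp/spdk.sock; the relative scripts/rpc.py path below stands in for the full workspace path used in this run. All three method names appear in the rpc_get_methods dump earlier in this log.
  # inspect the currently active scheduler and its period
  ./scripts/rpc.py framework_get_scheduler
  # select the dynamic scheduler (what scheduler.sh issued through its rpc_cmd wrapper)
  ./scripts/rpc.py framework_set_scheduler dynamic
  # finish subsystem initialization, since the app was started with --wait-for-rpc
  ./scripts/rpc.py framework_start_init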
00:04:27.454 10:32:55 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.454 10:32:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:27.454 10:32:55 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.454 10:32:55 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.454 10:32:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:27.454 ************************************ 00:04:27.454 START TEST scheduler_create_thread 00:04:27.454 ************************************ 00:04:27.454 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:27.454 10:32:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:27.454 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.454 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.454 2 00:04:27.454 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.454 10:32:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:27.454 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.454 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.454 3 00:04:27.454 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.454 10:32:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:27.454 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.454 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.713 4 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.713 5 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.713 6 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.713 7 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.713 8 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.713 9 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.713 10 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:27.713 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.280 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.280 10:32:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:28.280 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.280 10:32:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:29.657 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.657 10:32:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:29.657 10:32:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:29.657 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.657 10:32:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.593 10:32:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.593 00:04:30.593 real 0m3.097s 00:04:30.593 user 0m0.025s 00:04:30.593 sys 0m0.005s 00:04:30.593 10:32:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.593 10:32:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.593 ************************************ 00:04:30.593 END TEST scheduler_create_thread 00:04:30.593 ************************************ 00:04:30.593 10:32:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:30.593 10:32:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2497998 00:04:30.593 10:32:58 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 2497998 ']' 00:04:30.593 10:32:58 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 2497998 00:04:30.593 10:32:58 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:30.593 10:32:58 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:30.593 10:32:58 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2497998 00:04:30.852 10:32:58 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:30.852 10:32:58 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:30.852 10:32:58 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2497998' 00:04:30.852 killing process with pid 2497998 00:04:30.852 10:32:58 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 2497998 00:04:30.852 10:32:58 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 2497998 00:04:31.111 [2024-11-07 10:32:58.576550] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
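Note on the scheduler_create_thread subtest that just finished: it drives the test app through rpc.py's --plugin hook rather than built-in RPCs. A rough sketch of the calls it issued, assuming the scheduler test app is still running on /var/tmp/spdk.sock and the scheduler_plugin module is importable via PYTHONPATH, as the harness arranges; thread ids 11 and 12 are the ones returned in the trace above.
  # create a thread pinned to core 0 that reports itself ~100% busy
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # change the reported load of thread 11 to 50%
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  # remove thread 12 again
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12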
00:04:31.111 00:04:31.111 real 0m4.167s 00:04:31.111 user 0m6.730s 00:04:31.111 sys 0m0.359s 00:04:31.111 10:32:58 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.111 10:32:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:31.111 ************************************ 00:04:31.111 END TEST event_scheduler 00:04:31.111 ************************************ 00:04:31.369 10:32:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:31.369 10:32:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:31.369 10:32:58 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.369 10:32:58 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.369 10:32:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:31.369 ************************************ 00:04:31.369 START TEST app_repeat 00:04:31.369 ************************************ 00:04:31.369 10:32:58 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2498714 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2498714' 00:04:31.369 Process app_repeat pid: 2498714 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:31.369 spdk_app_start Round 0 00:04:31.369 10:32:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2498714 /var/tmp/spdk-nbd.sock 00:04:31.370 10:32:58 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2498714 ']' 00:04:31.370 10:32:58 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:31.370 10:32:58 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:31.370 10:32:58 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:31.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:31.370 10:32:58 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:31.370 10:32:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.370 [2024-11-07 10:32:58.857041] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:31.370 [2024-11-07 10:32:58.857083] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498714 ] 00:04:31.370 [2024-11-07 10:32:58.919392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.370 [2024-11-07 10:32:58.965726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.370 [2024-11-07 10:32:58.965731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.629 10:32:59 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:31.629 10:32:59 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:31.629 10:32:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.629 Malloc0 00:04:31.629 10:32:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.887 Malloc1 00:04:31.887 10:32:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.887 10:32:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:32.147 /dev/nbd0 00:04:32.147 10:32:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:32.147 10:32:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:32.147 1+0 records in 00:04:32.147 1+0 records out 00:04:32.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175605 s, 23.3 MB/s 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:32.147 10:32:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:32.148 10:32:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.148 10:32:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.148 10:32:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:32.411 /dev/nbd1 00:04:32.411 10:32:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:32.411 10:32:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:32.411 1+0 records in 00:04:32.411 1+0 records out 00:04:32.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214735 s, 19.1 MB/s 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:32.411 10:32:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:32.411 10:32:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.411 10:32:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.411 
10:32:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.411 10:32:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.411 10:32:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.669 10:33:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:32.669 { 00:04:32.669 "nbd_device": "/dev/nbd0", 00:04:32.669 "bdev_name": "Malloc0" 00:04:32.669 }, 00:04:32.669 { 00:04:32.669 "nbd_device": "/dev/nbd1", 00:04:32.669 "bdev_name": "Malloc1" 00:04:32.669 } 00:04:32.669 ]' 00:04:32.669 10:33:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:32.669 { 00:04:32.670 "nbd_device": "/dev/nbd0", 00:04:32.670 "bdev_name": "Malloc0" 00:04:32.670 }, 00:04:32.670 { 00:04:32.670 "nbd_device": "/dev/nbd1", 00:04:32.670 "bdev_name": "Malloc1" 00:04:32.670 } 00:04:32.670 ]' 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:32.670 /dev/nbd1' 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:32.670 /dev/nbd1' 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:32.670 256+0 records in 00:04:32.670 256+0 records out 00:04:32.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102447 s, 102 MB/s 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:32.670 256+0 records in 00:04:32.670 256+0 records out 00:04:32.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140296 s, 74.7 MB/s 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:32.670 256+0 records in 00:04:32.670 256+0 records out 00:04:32.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154332 s, 67.9 MB/s 00:04:32.670 10:33:00 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.670 10:33:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:32.928 10:33:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:32.928 10:33:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:32.928 10:33:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:32.928 10:33:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.928 10:33:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.928 10:33:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:32.928 10:33:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:32.928 10:33:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.928 10:33:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.928 10:33:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:33.186 10:33:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:33.186 10:33:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:33.186 10:33:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:33.186 10:33:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:33.186 10:33:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:33.186 10:33:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:33.186 10:33:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:33.186 10:33:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:33.186 10:33:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.186 10:33:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.186 10:33:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.444 10:33:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:33.444 10:33:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:33.444 10:33:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.444 10:33:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:33.444 10:33:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:33.444 10:33:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:33.444 10:33:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:33.444 10:33:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:33.444 10:33:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:33.444 10:33:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:33.444 10:33:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:33.444 10:33:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:33.444 10:33:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:33.703 10:33:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:33.703 [2024-11-07 10:33:01.301577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.703 [2024-11-07 10:33:01.339452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.703 [2024-11-07 10:33:01.339455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.961 [2024-11-07 10:33:01.380489] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:33.961 [2024-11-07 10:33:01.380538] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:36.489 10:33:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:36.489 10:33:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:36.489 spdk_app_start Round 1 00:04:36.489 10:33:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2498714 /var/tmp/spdk-nbd.sock 00:04:36.489 10:33:04 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2498714 ']' 00:04:36.489 10:33:04 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.489 10:33:04 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.489 10:33:04 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
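Each app_repeat round traced here repeats the same cycle over the /var/tmp/spdk-nbd.sock RPC socket: create two 64 MiB malloc bdevs, export them through the kernel nbd driver, write 1 MiB of random data to each device with O_DIRECT, and compare it back. Condensed to plain rpc.py and coreutils calls, this is a sketch of the flow rather than the test scripts themselves; the temp file stands in for the test's nbdrandtest path.

    SOCK=/var/tmp/spdk-nbd.sock
    RPC="./scripts/rpc.py -s $SOCK"
    $RPC bdev_malloc_create 64 4096            # 64 MiB bdev, 4096-byte blocks -> prints "Malloc0"
    $RPC bdev_malloc_create 64 4096            # second bdev -> "Malloc1"
    $RPC nbd_start_disk Malloc0 /dev/nbd0      # expose each bdev as a kernel block device
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    tmp=$(mktemp)                              # stand-in for .../test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256                # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct     # write pass
        cmp -b -n 1M "$tmp" "$nbd"                                # verify pass; exits non-zero on mismatch
    done
    rm "$tmp"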
00:04:36.489 10:33:04 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.489 10:33:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.746 10:33:04 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:36.746 10:33:04 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:36.746 10:33:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.004 Malloc0 00:04:37.004 10:33:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.262 Malloc1 00:04:37.262 10:33:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.262 10:33:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:37.520 /dev/nbd0 00:04:37.520 10:33:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:37.520 10:33:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:37.520 1+0 records in 00:04:37.520 1+0 records out 00:04:37.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244855 s, 16.7 MB/s 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:37.520 10:33:04 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:37.520 10:33:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.520 10:33:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.520 10:33:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:37.778 /dev/nbd1 00:04:37.778 10:33:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:37.778 10:33:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:37.778 1+0 records in 00:04:37.778 1+0 records out 00:04:37.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195151 s, 21.0 MB/s 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:37.778 10:33:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:37.778 10:33:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.778 10:33:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.778 10:33:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.778 10:33:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.778 10:33:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.778 10:33:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:37.778 { 00:04:37.778 "nbd_device": "/dev/nbd0", 00:04:37.778 "bdev_name": "Malloc0" 00:04:37.778 }, 00:04:37.778 { 00:04:37.778 "nbd_device": "/dev/nbd1", 00:04:37.778 "bdev_name": "Malloc1" 00:04:37.778 } 00:04:37.778 ]' 00:04:37.778 10:33:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:37.778 { 00:04:37.778 "nbd_device": "/dev/nbd0", 00:04:37.778 "bdev_name": "Malloc0" 00:04:37.778 }, 00:04:37.778 { 00:04:37.778 "nbd_device": "/dev/nbd1", 00:04:37.778 "bdev_name": "Malloc1" 00:04:37.778 } 00:04:37.778 ]' 00:04:37.778 10:33:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:38.036 /dev/nbd1' 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:38.036 /dev/nbd1' 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:38.036 256+0 records in 00:04:38.036 256+0 records out 00:04:38.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107057 s, 97.9 MB/s 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:38.036 256+0 records in 00:04:38.036 256+0 records out 00:04:38.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141269 s, 74.2 MB/s 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:38.036 256+0 records in 00:04:38.036 256+0 records out 00:04:38.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149604 s, 70.1 MB/s 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.036 10:33:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:38.295 10:33:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:38.295 10:33:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:38.295 10:33:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:38.295 10:33:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.295 10:33:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.295 10:33:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:38.295 10:33:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:38.295 10:33:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.295 10:33:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.295 10:33:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:38.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:38.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:38.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:38.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:38.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:38.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.553 10:33:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.553 10:33:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:38.553 10:33:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:38.553 10:33:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.553 10:33:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:38.553 10:33:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:38.553 10:33:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.553 10:33:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:38.553 10:33:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:38.553 10:33:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:38.553 10:33:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:38.553 10:33:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:38.553 10:33:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:38.553 10:33:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:38.810 10:33:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:39.068 [2024-11-07 10:33:06.570513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.068 [2024-11-07 10:33:06.608535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.068 [2024-11-07 10:33:06.608539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.068 [2024-11-07 10:33:06.649806] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:39.068 [2024-11-07 10:33:06.649848] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:42.347 10:33:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:42.347 10:33:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:42.347 spdk_app_start Round 2 00:04:42.347 10:33:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2498714 /var/tmp/spdk-nbd.sock 00:04:42.347 10:33:09 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2498714 ']' 00:04:42.347 10:33:09 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:42.347 10:33:09 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:42.347 10:33:09 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:42.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
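The teardown at the end of each round is also visible in the trace: nbd_stop_disk for both devices, a short wait until the node leaves /proc/partitions, then spdk_kill_instance SIGTERM followed by a 3-second sleep before the next round. A simplified version is shown below; the retry loop only approximates waitfornbd_exit from nbd_common.sh and is not a verbatim copy.

    SOCK=/var/tmp/spdk-nbd.sock
    RPC="./scripts/rpc.py -s $SOCK"
    for name in nbd0 nbd1; do
        $RPC nbd_stop_disk "/dev/$name"
        for i in $(seq 1 20); do                        # give the kernel time to detach the device
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done
    $RPC spdk_kill_instance SIGTERM
    sleep 3                                             # the test pauses before starting the next round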
00:04:42.347 10:33:09 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:42.347 10:33:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:42.347 10:33:09 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:42.347 10:33:09 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:42.347 10:33:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.347 Malloc0 00:04:42.347 10:33:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.347 Malloc1 00:04:42.605 10:33:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:42.605 /dev/nbd0 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:42.605 1+0 records in 00:04:42.605 1+0 records out 00:04:42.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157892 s, 25.9 MB/s 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:42.605 10:33:10 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.605 10:33:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:42.864 /dev/nbd1 00:04:42.864 10:33:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:42.864 10:33:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.864 1+0 records in 00:04:42.864 1+0 records out 00:04:42.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207139 s, 19.8 MB/s 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:42.864 10:33:10 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:42.864 10:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.864 10:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.864 10:33:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.864 10:33:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.864 10:33:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:43.122 { 00:04:43.122 "nbd_device": "/dev/nbd0", 00:04:43.122 "bdev_name": "Malloc0" 00:04:43.122 }, 00:04:43.122 { 00:04:43.122 "nbd_device": "/dev/nbd1", 00:04:43.122 "bdev_name": "Malloc1" 00:04:43.122 } 00:04:43.122 ]' 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:43.122 { 00:04:43.122 "nbd_device": "/dev/nbd0", 00:04:43.122 "bdev_name": "Malloc0" 00:04:43.122 }, 00:04:43.122 { 00:04:43.122 "nbd_device": "/dev/nbd1", 00:04:43.122 "bdev_name": "Malloc1" 00:04:43.122 } 00:04:43.122 ]' 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:43.122 /dev/nbd1' 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:43.122 /dev/nbd1' 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:43.122 256+0 records in 00:04:43.122 256+0 records out 00:04:43.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105239 s, 99.6 MB/s 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:43.122 256+0 records in 00:04:43.122 256+0 records out 00:04:43.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136125 s, 77.0 MB/s 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.122 10:33:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:43.381 256+0 records in 00:04:43.381 256+0 records out 00:04:43.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149044 s, 70.4 MB/s 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.381 10:33:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:43.381 10:33:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:43.381 10:33:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:43.381 10:33:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:43.381 10:33:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.381 10:33:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.381 10:33:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:43.381 10:33:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:43.381 10:33:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.381 10:33:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.381 10:33:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:43.640 10:33:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:43.640 10:33:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:43.640 10:33:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:43.640 10:33:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.640 10:33:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.640 10:33:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:43.640 10:33:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:43.640 10:33:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.640 10:33:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.640 10:33:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.640 10:33:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.898 10:33:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:43.898 10:33:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:43.898 10:33:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.898 10:33:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:43.898 10:33:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:43.898 10:33:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.898 10:33:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:43.898 10:33:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:43.898 10:33:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:43.898 10:33:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:43.898 10:33:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:43.898 10:33:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:43.898 10:33:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:44.156 10:33:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:44.414 [2024-11-07 10:33:11.842073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.414 [2024-11-07 10:33:11.880383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.414 [2024-11-07 10:33:11.880387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.414 [2024-11-07 10:33:11.921453] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:44.414 [2024-11-07 10:33:11.921495] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:47.700 10:33:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2498714 /var/tmp/spdk-nbd.sock 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2498714 ']' 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:47.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
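The count checks in the trace come from nbd_get_disks, which returns a JSON array of {nbd_device, bdev_name} objects; jq pulls out the device nodes and grep -c counts them (2 while both disks are attached, 0 after teardown, which is why a 'true' fallback appears after grep). Roughly:

    SOCK=/var/tmp/spdk-nbd.sock
    json=$(./scripts/rpc.py -s "$SOCK" nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)   # zero matches would otherwise abort under set -e
    echo "exported nbd devices: $count"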
00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:47.700 10:33:14 event.app_repeat -- event/event.sh@39 -- # killprocess 2498714 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 2498714 ']' 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 2498714 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2498714 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2498714' 00:04:47.700 killing process with pid 2498714 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@971 -- # kill 2498714 00:04:47.700 10:33:14 event.app_repeat -- common/autotest_common.sh@976 -- # wait 2498714 00:04:47.700 spdk_app_start is called in Round 0. 00:04:47.700 Shutdown signal received, stop current app iteration 00:04:47.700 Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 reinitialization... 00:04:47.700 spdk_app_start is called in Round 1. 00:04:47.700 Shutdown signal received, stop current app iteration 00:04:47.700 Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 reinitialization... 00:04:47.700 spdk_app_start is called in Round 2. 00:04:47.700 Shutdown signal received, stop current app iteration 00:04:47.700 Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 reinitialization... 00:04:47.700 spdk_app_start is called in Round 3. 
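killprocess above follows the usual autotest pattern: check the pid is still alive with kill -0, read its comm name from ps (a separate branch for processes launched under sudo is traced but not shown here), log which pid is being killed, send the default SIGTERM, and wait for it to exit. A stripped-down sketch, not the autotest_common.sh original:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0           # nothing to do if it already exited
        echo "killing process with pid $pid"
        kill "$pid"                          # default signal is SIGTERM
        wait "$pid" 2>/dev/null || true      # reap it when it is a child of this shell
    }
    killprocess "$repeat_pid"                # 2498714 in the run above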
00:04:47.700 Shutdown signal received, stop current app iteration 00:04:47.700 10:33:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:47.700 10:33:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:47.700 00:04:47.700 real 0m16.231s 00:04:47.700 user 0m35.570s 00:04:47.700 sys 0m2.538s 00:04:47.700 10:33:15 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.700 10:33:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.700 ************************************ 00:04:47.700 END TEST app_repeat 00:04:47.700 ************************************ 00:04:47.700 10:33:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:47.700 10:33:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:47.700 10:33:15 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.700 10:33:15 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.700 10:33:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.700 ************************************ 00:04:47.700 START TEST cpu_locks 00:04:47.700 ************************************ 00:04:47.700 10:33:15 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:47.700 * Looking for test storage... 00:04:47.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:47.700 10:33:15 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:47.700 10:33:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:47.700 10:33:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:47.700 10:33:15 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.700 10:33:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:47.700 10:33:15 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.700 10:33:15 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:47.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.700 --rc genhtml_branch_coverage=1 00:04:47.700 --rc genhtml_function_coverage=1 00:04:47.700 --rc genhtml_legend=1 00:04:47.700 --rc geninfo_all_blocks=1 00:04:47.700 --rc geninfo_unexecuted_blocks=1 00:04:47.700 00:04:47.700 ' 00:04:47.700 10:33:15 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:47.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.700 --rc genhtml_branch_coverage=1 00:04:47.700 --rc genhtml_function_coverage=1 00:04:47.700 --rc genhtml_legend=1 00:04:47.700 --rc geninfo_all_blocks=1 00:04:47.700 --rc geninfo_unexecuted_blocks=1 00:04:47.700 00:04:47.700 ' 00:04:47.700 10:33:15 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:47.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.700 --rc genhtml_branch_coverage=1 00:04:47.700 --rc genhtml_function_coverage=1 00:04:47.700 --rc genhtml_legend=1 00:04:47.700 --rc geninfo_all_blocks=1 00:04:47.700 --rc geninfo_unexecuted_blocks=1 00:04:47.700 00:04:47.700 ' 00:04:47.700 10:33:15 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:47.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.700 --rc genhtml_branch_coverage=1 00:04:47.700 --rc genhtml_function_coverage=1 00:04:47.700 --rc genhtml_legend=1 00:04:47.700 --rc geninfo_all_blocks=1 00:04:47.700 --rc geninfo_unexecuted_blocks=1 00:04:47.700 00:04:47.700 ' 00:04:47.701 10:33:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:47.701 10:33:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:47.701 10:33:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:47.701 10:33:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:47.701 10:33:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.701 10:33:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.701 10:33:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.701 ************************************ 
00:04:47.701 START TEST default_locks 00:04:47.701 ************************************ 00:04:47.701 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:47.701 10:33:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2502205 00:04:47.701 10:33:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.701 10:33:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2502205 00:04:47.701 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2502205 ']' 00:04:47.701 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.701 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.701 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.701 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.701 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.960 [2024-11-07 10:33:15.387522] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:47.960 [2024-11-07 10:33:15.387566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502205 ] 00:04:47.960 [2024-11-07 10:33:15.449856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.960 [2024-11-07 10:33:15.491084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.218 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:48.218 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:48.218 10:33:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2502205 00:04:48.218 10:33:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2502205 00:04:48.218 10:33:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.477 lslocks: write error 00:04:48.477 10:33:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2502205 00:04:48.477 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 2502205 ']' 00:04:48.477 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 2502205 00:04:48.477 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:04:48.477 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:48.477 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2502205 00:04:48.477 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:48.477 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:48.477 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 2502205' 00:04:48.477 killing process with pid 2502205 00:04:48.477 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 2502205 00:04:48.477 10:33:15 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 2502205 00:04:48.736 10:33:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2502205 00:04:48.736 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2502205 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2502205 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2502205 ']' 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
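The locks_exist check traced in default_locks above is the core assertion of this suite: an spdk_tgt started with -m 0x1 must hold a file lock whose path contains spdk_cpu_lock (the /var/tmp/spdk_cpu_lock_NNN files that check_remaining_locks lists later in this log). The 'lslocks: write error' line is most likely just lslocks complaining that grep -q closed the pipe after the first match. A sketch of the helper as reconstructed from the trace:

locks_exist() {
    local pid=$1
    # lslocks lists the file locks held by the pid; a reactor that pinned its core
    # shows an entry for its spdk_cpu_lock_* file, so grep -q succeeds.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}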
00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2502205) - No such process 00:04:48.737 ERROR: process (pid: 2502205) is no longer running 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:48.737 00:04:48.737 real 0m0.929s 00:04:48.737 user 0m0.902s 00:04:48.737 sys 0m0.431s 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.737 10:33:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.737 ************************************ 00:04:48.737 END TEST default_locks 00:04:48.737 ************************************ 00:04:48.737 10:33:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:48.737 10:33:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.737 10:33:16 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.737 10:33:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:48.737 ************************************ 00:04:48.737 START TEST default_locks_via_rpc 00:04:48.737 ************************************ 00:04:48.737 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:04:48.737 10:33:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2502432 00:04:48.737 10:33:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2502432 00:04:48.737 10:33:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.737 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2502432 ']' 00:04:48.737 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.737 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.737 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
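After killprocess, default_locks wraps waitforlisten in NOT, so the step passes only because the RPC socket never comes back ('kill: (2502205) - No such process', es=1 above). A stripped-down sketch of that inversion; the real NOT/valid_exec_arg pair in autotest_common.sh also verifies that the wrapped name is a function or executable, as the type -t trace shows:

NOT() {
    if "$@"; then
        return 1        # the wrapped command unexpectedly succeeded
    fi
    return 0            # expected failure, e.g. waitforlisten on a dead pid
}

NOT waitforlisten 2502205    # succeeds precisely because pid 2502205 is gone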
00:04:48.737 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.737 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.737 [2024-11-07 10:33:16.393764] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:48.737 [2024-11-07 10:33:16.393810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502432 ] 00:04:48.996 [2024-11-07 10:33:16.455760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.996 [2024-11-07 10:33:16.493412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2502432 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2502432 00:04:49.254 10:33:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:49.513 10:33:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2502432 00:04:49.513 10:33:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 2502432 ']' 00:04:49.513 10:33:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 2502432 00:04:49.513 10:33:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:04:49.513 10:33:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.513 10:33:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2502432 00:04:49.513 10:33:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:49.513 
10:33:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:49.513 10:33:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2502432' 00:04:49.513 killing process with pid 2502432 00:04:49.513 10:33:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 2502432 00:04:49.513 10:33:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 2502432 00:04:49.772 00:04:49.772 real 0m1.075s 00:04:49.772 user 0m1.036s 00:04:49.772 sys 0m0.474s 00:04:49.772 10:33:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.772 10:33:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.772 ************************************ 00:04:49.772 END TEST default_locks_via_rpc 00:04:49.772 ************************************ 00:04:50.030 10:33:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:50.030 10:33:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:50.030 10:33:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.030 10:33:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.030 ************************************ 00:04:50.030 START TEST non_locking_app_on_locked_coremask 00:04:50.030 ************************************ 00:04:50.030 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:04:50.030 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2502686 00:04:50.030 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2502686 /var/tmp/spdk.sock 00:04:50.031 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.031 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2502686 ']' 00:04:50.031 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.031 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:50.031 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.031 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:50.031 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.031 [2024-11-07 10:33:17.534929] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:50.031 [2024-11-07 10:33:17.534973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502686 ] 00:04:50.031 [2024-11-07 10:33:17.596851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.031 [2024-11-07 10:33:17.639292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.290 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.290 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:50.290 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2502689 00:04:50.290 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2502689 /var/tmp/spdk2.sock 00:04:50.290 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:50.290 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2502689 ']' 00:04:50.290 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.290 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:50.290 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.290 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:50.290 10:33:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.290 [2024-11-07 10:33:17.899556] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:50.290 [2024-11-07 10:33:17.899608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502689 ] 00:04:50.549 [2024-11-07 10:33:17.987373] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
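non_locking_app_on_locked_coremask starts a second spdk_tgt on the already-locked core 0; that only works because the second instance passes --disable-cpumask-locks, which the 'CPU core locks deactivated.' notice above confirms. In outline (mask, flag and socket taken from the trace; the backgrounding and repo-relative path are illustrative):

build/bin/spdk_tgt -m 0x1 &                                                  # claims the core 0 lock file
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # skips locking, shares core 0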
00:04:50.549 [2024-11-07 10:33:17.987398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.549 [2024-11-07 10:33:18.072682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.115 10:33:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:51.116 10:33:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:51.116 10:33:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2502686 00:04:51.116 10:33:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2502686 00:04:51.116 10:33:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.683 lslocks: write error 00:04:51.683 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2502686 00:04:51.683 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2502686 ']' 00:04:51.683 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2502686 00:04:51.683 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:51.683 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:51.683 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2502686 00:04:51.683 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:51.683 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:51.683 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2502686' 00:04:51.683 killing process with pid 2502686 00:04:51.683 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2502686 00:04:51.683 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2502686 00:04:52.249 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2502689 00:04:52.249 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2502689 ']' 00:04:52.249 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2502689 00:04:52.249 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:52.249 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:52.249 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2502689 00:04:52.249 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:52.249 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:52.249 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2502689' 00:04:52.249 
killing process with pid 2502689 00:04:52.249 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2502689 00:04:52.249 10:33:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2502689 00:04:52.815 00:04:52.815 real 0m2.727s 00:04:52.815 user 0m2.890s 00:04:52.815 sys 0m0.884s 00:04:52.815 10:33:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.815 10:33:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.815 ************************************ 00:04:52.815 END TEST non_locking_app_on_locked_coremask 00:04:52.815 ************************************ 00:04:52.815 10:33:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:52.815 10:33:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:52.815 10:33:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.815 10:33:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.815 ************************************ 00:04:52.815 START TEST locking_app_on_unlocked_coremask 00:04:52.815 ************************************ 00:04:52.815 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:04:52.815 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2503184 00:04:52.815 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2503184 /var/tmp/spdk.sock 00:04:52.815 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:52.815 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2503184 ']' 00:04:52.815 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.815 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.815 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.815 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.815 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:52.815 [2024-11-07 10:33:20.329502] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:52.815 [2024-11-07 10:33:20.329546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503184 ] 00:04:52.815 [2024-11-07 10:33:20.391244] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:52.815 [2024-11-07 10:33:20.391271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.815 [2024-11-07 10:33:20.428939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.074 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.074 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:53.074 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2503195 00:04:53.074 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2503195 /var/tmp/spdk2.sock 00:04:53.074 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:53.074 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2503195 ']' 00:04:53.074 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:53.074 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.074 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:53.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:53.074 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.074 10:33:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.074 [2024-11-07 10:33:20.687879] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:53.074 [2024-11-07 10:33:20.687928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503195 ] 00:04:53.332 [2024-11-07 10:33:20.779940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.332 [2024-11-07 10:33:20.860985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.899 10:33:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.899 10:33:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:53.899 10:33:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2503195 00:04:53.899 10:33:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2503195 00:04:53.899 10:33:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:54.466 lslocks: write error 00:04:54.466 10:33:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2503184 00:04:54.466 10:33:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2503184 ']' 00:04:54.466 10:33:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2503184 00:04:54.466 10:33:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:54.466 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:54.466 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2503184 00:04:54.466 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:54.466 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:54.466 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2503184' 00:04:54.466 killing process with pid 2503184 00:04:54.466 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2503184 00:04:54.466 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2503184 00:04:55.033 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2503195 00:04:55.033 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2503195 ']' 00:04:55.033 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2503195 00:04:55.033 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:55.033 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.033 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2503195 00:04:55.291 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.291 10:33:22 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.291 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2503195' 00:04:55.291 killing process with pid 2503195 00:04:55.291 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2503195 00:04:55.291 10:33:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2503195 00:04:55.549 00:04:55.549 real 0m2.729s 00:04:55.549 user 0m2.883s 00:04:55.549 sys 0m0.879s 00:04:55.549 10:33:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.549 10:33:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.549 ************************************ 00:04:55.549 END TEST locking_app_on_unlocked_coremask 00:04:55.549 ************************************ 00:04:55.549 10:33:23 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:55.549 10:33:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:55.549 10:33:23 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.549 10:33:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.549 ************************************ 00:04:55.549 START TEST locking_app_on_locked_coremask 00:04:55.549 ************************************ 00:04:55.549 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:04:55.549 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2503683 00:04:55.549 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2503683 /var/tmp/spdk.sock 00:04:55.549 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.549 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2503683 ']' 00:04:55.549 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.549 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.549 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.549 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.549 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.549 [2024-11-07 10:33:23.129647] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:55.549 [2024-11-07 10:33:23.129690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503683 ] 00:04:55.549 [2024-11-07 10:33:23.190614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.808 [2024-11-07 10:33:23.229095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.808 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.808 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:55.808 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2503692 00:04:55.808 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2503692 /var/tmp/spdk2.sock 00:04:55.808 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:55.808 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:55.808 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2503692 /var/tmp/spdk2.sock 00:04:55.808 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:55.808 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.809 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:55.809 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:55.809 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2503692 /var/tmp/spdk2.sock 00:04:55.809 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2503692 ']' 00:04:55.809 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:55.809 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.809 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:55.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:55.809 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.809 10:33:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.067 [2024-11-07 10:33:23.483866] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:56.067 [2024-11-07 10:33:23.483909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503692 ] 00:04:56.067 [2024-11-07 10:33:23.574620] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2503683 has claimed it. 00:04:56.067 [2024-11-07 10:33:23.574660] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:56.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2503692) - No such process 00:04:56.633 ERROR: process (pid: 2503692) is no longer running 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2503683 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2503683 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.633 lslocks: write error 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2503683 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2503683 ']' 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2503683 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:56.633 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2503683 00:04:56.892 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:56.892 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:56.892 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2503683' 00:04:56.892 killing process with pid 2503683 00:04:56.892 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2503683 00:04:56.892 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2503683 00:04:57.150 00:04:57.150 real 0m1.551s 00:04:57.150 user 0m1.684s 00:04:57.150 sys 0m0.500s 00:04:57.150 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
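locking_app_on_locked_coremask is the negative counterpart: the second spdk_tgt keeps core locking enabled, trips over the lock held by pid 2503683 ('Cannot create lock on core 0, probably process 2503683 has claimed it.') and exits with 'Unable to acquire lock on assigned core mask - exiting.', which is exactly what the surrounding NOT waitforlisten expects. The shape of the conflict, with paths and flags from the trace:

build/bin/spdk_tgt -m 0x1 &                        # first instance holds the core 0 lock
build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # same mask, locking still on -> refuses to start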
00:04:57.150 10:33:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.150 ************************************ 00:04:57.151 END TEST locking_app_on_locked_coremask 00:04:57.151 ************************************ 00:04:57.151 10:33:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:57.151 10:33:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:57.151 10:33:24 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.151 10:33:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.151 ************************************ 00:04:57.151 START TEST locking_overlapped_coremask 00:04:57.151 ************************************ 00:04:57.151 10:33:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:04:57.151 10:33:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2503948 00:04:57.151 10:33:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2503948 /var/tmp/spdk.sock 00:04:57.151 10:33:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:57.151 10:33:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2503948 ']' 00:04:57.151 10:33:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.151 10:33:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.151 10:33:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.151 10:33:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.151 10:33:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.151 [2024-11-07 10:33:24.755763] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:57.151 [2024-11-07 10:33:24.755809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503948 ] 00:04:57.151 [2024-11-07 10:33:24.818738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:57.409 [2024-11-07 10:33:24.862638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.409 [2024-11-07 10:33:24.862736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.409 [2024-11-07 10:33:24.862736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.409 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.409 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:57.409 10:33:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2504070 00:04:57.409 10:33:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2504070 /var/tmp/spdk2.sock 00:04:57.409 10:33:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:57.409 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:57.409 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2504070 /var/tmp/spdk2.sock 00:04:57.409 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:57.667 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.667 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:57.667 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.667 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2504070 /var/tmp/spdk2.sock 00:04:57.667 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2504070 ']' 00:04:57.667 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.667 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.667 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.667 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.667 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.667 [2024-11-07 10:33:25.134100] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:04:57.667 [2024-11-07 10:33:25.134168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504070 ] 00:04:57.667 [2024-11-07 10:33:25.245907] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2503948 has claimed it. 00:04:57.667 [2024-11-07 10:33:25.245947] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:58.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2504070) - No such process 00:04:58.233 ERROR: process (pid: 2504070) is no longer running 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2503948 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 2503948 ']' 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 2503948 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2503948 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2503948' 00:04:58.233 killing process with pid 2503948 00:04:58.233 10:33:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 2503948 00:04:58.233 10:33:25 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 2503948 00:04:58.507 00:04:58.507 real 0m1.429s 00:04:58.507 user 0m3.965s 00:04:58.507 sys 0m0.389s 00:04:58.507 10:33:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.507 10:33:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:58.507 ************************************ 00:04:58.507 END TEST locking_overlapped_coremask 00:04:58.507 ************************************ 00:04:58.507 10:33:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:58.507 10:33:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.507 10:33:26 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.507 10:33:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:58.794 ************************************ 00:04:58.794 START TEST locking_overlapped_coremask_via_rpc 00:04:58.794 ************************************ 00:04:58.794 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:04:58.794 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2504216 00:04:58.794 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2504216 /var/tmp/spdk.sock 00:04:58.794 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:58.794 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2504216 ']' 00:04:58.794 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.794 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:58.794 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.794 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:58.794 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.794 [2024-11-07 10:33:26.247502] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:58.794 [2024-11-07 10:33:26.247556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504216 ] 00:04:58.794 [2024-11-07 10:33:26.312569] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:58.794 [2024-11-07 10:33:26.312593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.794 [2024-11-07 10:33:26.355195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.794 [2024-11-07 10:33:26.355294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.794 [2024-11-07 10:33:26.355295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.068 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:59.068 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:59.068 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:59.068 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2504417 00:04:59.068 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2504417 /var/tmp/spdk2.sock 00:04:59.068 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2504417 ']' 00:04:59.068 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.068 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:59.068 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.068 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:59.068 10:33:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.068 [2024-11-07 10:33:26.600864] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:04:59.068 [2024-11-07 10:33:26.600915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504417 ] 00:04:59.068 [2024-11-07 10:33:26.694448] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:59.068 [2024-11-07 10:33:26.694481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:59.326 [2024-11-07 10:33:26.782707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.326 [2024-11-07 10:33:26.786473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.326 [2024-11-07 10:33:26.786474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.891 [2024-11-07 10:33:27.480504] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2504216 has claimed it. 
00:04:59.891 request: 00:04:59.891 { 00:04:59.891 "method": "framework_enable_cpumask_locks", 00:04:59.891 "req_id": 1 00:04:59.891 } 00:04:59.891 Got JSON-RPC error response 00:04:59.891 response: 00:04:59.891 { 00:04:59.891 "code": -32603, 00:04:59.891 "message": "Failed to claim CPU core: 2" 00:04:59.891 } 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2504216 /var/tmp/spdk.sock 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2504216 ']' 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:59.891 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.148 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.149 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:00.149 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2504417 /var/tmp/spdk2.sock 00:05:00.149 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2504417 ']' 00:05:00.149 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.149 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:00.149 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
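The -32603 "Failed to claim CPU core: 2" response above is the expected outcome: the first target (pid 2504216) holds the per-core lock files for mask 0x7, so the second target's framework_enable_cpumask_locks call cannot claim core 2. The check_remaining_locks helper seen earlier in the trace verifies that the surviving target still owns exactly those files; a minimal standalone sketch of that check, assuming the /var/tmp/spdk_cpu_lock_NNN naming shown in the trace:

# compare the lock files actually present against the set expected for mask 0x7 (cores 0-2)
locks=(/var/tmp/spdk_cpu_lock_*)                    # whatever per-core lock files exist right now
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0, 1 and 2
if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "cores 0-2 are still locked by the first target"
else
    echo "unexpected lock files: ${locks[*]}" >&2
fi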
00:05:00.149 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:00.149 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.406 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:00.406 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:00.406 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:00.406 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:00.406 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:00.406 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:00.406 00:05:00.406 real 0m1.716s 00:05:00.406 user 0m0.837s 00:05:00.406 sys 0m0.138s 00:05:00.406 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.406 10:33:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.406 ************************************ 00:05:00.406 END TEST locking_overlapped_coremask_via_rpc 00:05:00.406 ************************************ 00:05:00.406 10:33:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:00.406 10:33:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2504216 ]] 00:05:00.406 10:33:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2504216 00:05:00.406 10:33:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2504216 ']' 00:05:00.406 10:33:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2504216 00:05:00.406 10:33:27 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:00.406 10:33:27 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:00.406 10:33:27 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2504216 00:05:00.406 10:33:27 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:00.406 10:33:27 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:00.406 10:33:27 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2504216' 00:05:00.407 killing process with pid 2504216 00:05:00.407 10:33:27 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2504216 00:05:00.407 10:33:27 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2504216 00:05:00.665 10:33:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2504417 ]] 00:05:00.665 10:33:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2504417 00:05:00.665 10:33:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2504417 ']' 00:05:00.665 10:33:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2504417 00:05:00.665 10:33:28 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:00.665 10:33:28 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:05:00.665 10:33:28 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2504417 00:05:00.922 10:33:28 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:00.922 10:33:28 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:00.922 10:33:28 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2504417' 00:05:00.922 killing process with pid 2504417 00:05:00.922 10:33:28 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2504417 00:05:00.922 10:33:28 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2504417 00:05:01.180 10:33:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:01.180 10:33:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:01.180 10:33:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2504216 ]] 00:05:01.180 10:33:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2504216 00:05:01.180 10:33:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2504216 ']' 00:05:01.180 10:33:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2504216 00:05:01.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2504216) - No such process 00:05:01.180 10:33:28 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2504216 is not found' 00:05:01.180 Process with pid 2504216 is not found 00:05:01.180 10:33:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2504417 ]] 00:05:01.181 10:33:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2504417 00:05:01.181 10:33:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2504417 ']' 00:05:01.181 10:33:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2504417 00:05:01.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2504417) - No such process 00:05:01.181 10:33:28 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2504417 is not found' 00:05:01.181 Process with pid 2504417 is not found 00:05:01.181 10:33:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:01.181 00:05:01.181 real 0m13.532s 00:05:01.181 user 0m24.054s 00:05:01.181 sys 0m4.607s 00:05:01.181 10:33:28 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.181 10:33:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.181 ************************************ 00:05:01.181 END TEST cpu_locks 00:05:01.181 ************************************ 00:05:01.181 00:05:01.181 real 0m38.009s 00:05:01.181 user 1m12.909s 00:05:01.181 sys 0m8.052s 00:05:01.181 10:33:28 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.181 10:33:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.181 ************************************ 00:05:01.181 END TEST event 00:05:01.181 ************************************ 00:05:01.181 10:33:28 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:01.181 10:33:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:01.181 10:33:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.181 10:33:28 -- common/autotest_common.sh@10 -- # set +x 00:05:01.181 ************************************ 00:05:01.181 START TEST thread 00:05:01.181 ************************************ 00:05:01.181 10:33:28 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:01.181 * Looking for test storage... 00:05:01.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:01.181 10:33:28 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:01.181 10:33:28 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:01.181 10:33:28 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:01.440 10:33:28 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:01.440 10:33:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.440 10:33:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.440 10:33:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.440 10:33:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.440 10:33:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.440 10:33:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.440 10:33:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.440 10:33:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.440 10:33:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.440 10:33:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.440 10:33:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.440 10:33:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:01.440 10:33:28 thread -- scripts/common.sh@345 -- # : 1 00:05:01.440 10:33:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.440 10:33:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.440 10:33:28 thread -- scripts/common.sh@365 -- # decimal 1 00:05:01.440 10:33:28 thread -- scripts/common.sh@353 -- # local d=1 00:05:01.440 10:33:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.440 10:33:28 thread -- scripts/common.sh@355 -- # echo 1 00:05:01.440 10:33:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.440 10:33:28 thread -- scripts/common.sh@366 -- # decimal 2 00:05:01.440 10:33:28 thread -- scripts/common.sh@353 -- # local d=2 00:05:01.440 10:33:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.440 10:33:28 thread -- scripts/common.sh@355 -- # echo 2 00:05:01.440 10:33:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.440 10:33:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.440 10:33:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.440 10:33:28 thread -- scripts/common.sh@368 -- # return 0 00:05:01.440 10:33:28 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.440 10:33:28 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:01.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.440 --rc genhtml_branch_coverage=1 00:05:01.440 --rc genhtml_function_coverage=1 00:05:01.440 --rc genhtml_legend=1 00:05:01.440 --rc geninfo_all_blocks=1 00:05:01.440 --rc geninfo_unexecuted_blocks=1 00:05:01.440 00:05:01.440 ' 00:05:01.440 10:33:28 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:01.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.440 --rc genhtml_branch_coverage=1 00:05:01.440 --rc genhtml_function_coverage=1 00:05:01.440 --rc genhtml_legend=1 00:05:01.440 --rc geninfo_all_blocks=1 00:05:01.440 --rc geninfo_unexecuted_blocks=1 00:05:01.440 
00:05:01.440 ' 00:05:01.440 10:33:28 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:01.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.440 --rc genhtml_branch_coverage=1 00:05:01.440 --rc genhtml_function_coverage=1 00:05:01.440 --rc genhtml_legend=1 00:05:01.440 --rc geninfo_all_blocks=1 00:05:01.440 --rc geninfo_unexecuted_blocks=1 00:05:01.440 00:05:01.440 ' 00:05:01.440 10:33:28 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:01.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.440 --rc genhtml_branch_coverage=1 00:05:01.440 --rc genhtml_function_coverage=1 00:05:01.440 --rc genhtml_legend=1 00:05:01.440 --rc geninfo_all_blocks=1 00:05:01.440 --rc geninfo_unexecuted_blocks=1 00:05:01.440 00:05:01.440 ' 00:05:01.440 10:33:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:01.440 10:33:28 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:01.440 10:33:28 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.440 10:33:28 thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.440 ************************************ 00:05:01.440 START TEST thread_poller_perf 00:05:01.440 ************************************ 00:05:01.440 10:33:28 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:01.440 [2024-11-07 10:33:28.970019] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:01.440 [2024-11-07 10:33:28.970091] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2504794 ] 00:05:01.440 [2024-11-07 10:33:29.035971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.440 [2024-11-07 10:33:29.076575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.440 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:02.815 [2024-11-07T09:33:30.486Z] ====================================== 00:05:02.815 [2024-11-07T09:33:30.486Z] busy:2305029146 (cyc) 00:05:02.815 [2024-11-07T09:33:30.486Z] total_run_count: 410000 00:05:02.815 [2024-11-07T09:33:30.486Z] tsc_hz: 2300000000 (cyc) 00:05:02.815 [2024-11-07T09:33:30.486Z] ====================================== 00:05:02.815 [2024-11-07T09:33:30.486Z] poller_cost: 5622 (cyc), 2444 (nsec) 00:05:02.815 00:05:02.815 real 0m1.176s 00:05:02.815 user 0m1.103s 00:05:02.815 sys 0m0.069s 00:05:02.815 10:33:30 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.815 10:33:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.815 ************************************ 00:05:02.815 END TEST thread_poller_perf 00:05:02.815 ************************************ 00:05:02.815 10:33:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:02.815 10:33:30 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:02.815 10:33:30 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.815 10:33:30 thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.815 ************************************ 00:05:02.815 START TEST thread_poller_perf 00:05:02.815 ************************************ 00:05:02.815 10:33:30 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:02.815 [2024-11-07 10:33:30.215030] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:02.816 [2024-11-07 10:33:30.215091] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505044 ] 00:05:02.816 [2024-11-07 10:33:30.282795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.816 [2024-11-07 10:33:30.323634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.816 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:03.751 [2024-11-07T09:33:31.422Z] ====================================== 00:05:03.751 [2024-11-07T09:33:31.422Z] busy:2301605244 (cyc) 00:05:03.751 [2024-11-07T09:33:31.422Z] total_run_count: 5087000 00:05:03.751 [2024-11-07T09:33:31.422Z] tsc_hz: 2300000000 (cyc) 00:05:03.751 [2024-11-07T09:33:31.422Z] ====================================== 00:05:03.751 [2024-11-07T09:33:31.422Z] poller_cost: 452 (cyc), 196 (nsec) 00:05:03.751 00:05:03.751 real 0m1.168s 00:05:03.751 user 0m1.097s 00:05:03.751 sys 0m0.067s 00:05:03.751 10:33:31 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.751 10:33:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.751 ************************************ 00:05:03.751 END TEST thread_poller_perf 00:05:03.751 ************************************ 00:05:03.751 10:33:31 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:03.751 00:05:03.751 real 0m2.629s 00:05:03.751 user 0m2.357s 00:05:03.751 sys 0m0.282s 00:05:03.751 10:33:31 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.751 10:33:31 thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.751 ************************************ 00:05:03.751 END TEST thread 00:05:03.751 ************************************ 00:05:04.010 10:33:31 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:04.010 10:33:31 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:04.010 10:33:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:04.010 10:33:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:04.010 10:33:31 -- common/autotest_common.sh@10 -- # set +x 00:05:04.010 ************************************ 00:05:04.010 START TEST app_cmdline 00:05:04.010 ************************************ 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:04.010 * Looking for test storage... 
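The poller_cost figures printed in the two result blocks above are consistent with a simple derivation from the other counters: cycles per poll is busy cycles divided by total_run_count, and the nanosecond value rescales that by the reported tsc_hz. A quick bash check with the numbers from the first run (the 0-microsecond run works out the same way: 2301605244 / 5087000 ≈ 452 cyc ≈ 196 nsec):

busy_cyc=2305029146     # busy cycles reported by the first poller_perf run
runs=410000             # total_run_count from the same run
tsc_hz=2300000000       # reported TSC frequency, 2.3 GHz
cost_cyc=$(( busy_cyc / runs ))                   # 5622 cyc, matching the printed poller_cost
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # 2444 nsec, matching the printed poller_cost
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"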
00:05:04.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.010 10:33:31 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:04.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.010 --rc genhtml_branch_coverage=1 00:05:04.010 --rc genhtml_function_coverage=1 00:05:04.010 --rc genhtml_legend=1 00:05:04.010 --rc geninfo_all_blocks=1 00:05:04.010 --rc geninfo_unexecuted_blocks=1 00:05:04.010 00:05:04.010 ' 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:04.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.010 --rc genhtml_branch_coverage=1 00:05:04.010 --rc genhtml_function_coverage=1 00:05:04.010 --rc genhtml_legend=1 00:05:04.010 --rc geninfo_all_blocks=1 00:05:04.010 --rc geninfo_unexecuted_blocks=1 
00:05:04.010 00:05:04.010 ' 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:04.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.010 --rc genhtml_branch_coverage=1 00:05:04.010 --rc genhtml_function_coverage=1 00:05:04.010 --rc genhtml_legend=1 00:05:04.010 --rc geninfo_all_blocks=1 00:05:04.010 --rc geninfo_unexecuted_blocks=1 00:05:04.010 00:05:04.010 ' 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:04.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.010 --rc genhtml_branch_coverage=1 00:05:04.010 --rc genhtml_function_coverage=1 00:05:04.010 --rc genhtml_legend=1 00:05:04.010 --rc geninfo_all_blocks=1 00:05:04.010 --rc geninfo_unexecuted_blocks=1 00:05:04.010 00:05:04.010 ' 00:05:04.010 10:33:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:04.010 10:33:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2505345 00:05:04.010 10:33:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2505345 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 2505345 ']' 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.010 10:33:31 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:04.010 10:33:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:04.269 [2024-11-07 10:33:31.685004] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:05:04.269 [2024-11-07 10:33:31.685054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2505345 ] 00:05:04.269 [2024-11-07 10:33:31.747468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.269 [2024-11-07 10:33:31.789955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.528 10:33:31 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:04.528 10:33:31 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:04.528 10:33:31 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:04.528 { 00:05:04.528 "version": "SPDK v25.01-pre git sha1 899af6c35", 00:05:04.528 "fields": { 00:05:04.528 "major": 25, 00:05:04.528 "minor": 1, 00:05:04.528 "patch": 0, 00:05:04.528 "suffix": "-pre", 00:05:04.528 "commit": "899af6c35" 00:05:04.528 } 00:05:04.528 } 00:05:04.528 10:33:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:04.528 10:33:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:04.528 10:33:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:04.528 10:33:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:04.528 10:33:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:04.528 10:33:32 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.528 10:33:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:04.528 10:33:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:04.528 10:33:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:04.785 10:33:32 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.785 10:33:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:04.785 10:33:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:04.786 10:33:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:04.786 request: 00:05:04.786 { 00:05:04.786 "method": "env_dpdk_get_mem_stats", 00:05:04.786 "req_id": 1 00:05:04.786 } 00:05:04.786 Got JSON-RPC error response 00:05:04.786 response: 00:05:04.786 { 00:05:04.786 "code": -32601, 00:05:04.786 "message": "Method not found" 00:05:04.786 } 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:04.786 10:33:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2505345 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 2505345 ']' 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 2505345 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:04.786 10:33:32 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2505345 00:05:05.044 10:33:32 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:05.044 10:33:32 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:05.044 10:33:32 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2505345' 00:05:05.044 killing process with pid 2505345 00:05:05.044 10:33:32 app_cmdline -- common/autotest_common.sh@971 -- # kill 2505345 00:05:05.044 10:33:32 app_cmdline -- common/autotest_common.sh@976 -- # wait 2505345 00:05:05.303 00:05:05.303 real 0m1.309s 00:05:05.303 user 0m1.539s 00:05:05.303 sys 0m0.432s 00:05:05.303 10:33:32 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.303 10:33:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:05.303 ************************************ 00:05:05.303 END TEST app_cmdline 00:05:05.303 ************************************ 00:05:05.303 10:33:32 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:05.303 10:33:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:05.303 10:33:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:05.303 10:33:32 -- common/autotest_common.sh@10 -- # set +x 00:05:05.303 ************************************ 00:05:05.303 START TEST version 00:05:05.303 ************************************ 00:05:05.303 10:33:32 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:05.303 * Looking for test storage... 
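The -32601 "Method not found" error above is what the cmdline test is designed to provoke: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods (see its launch line earlier in the trace), so env_dpdk_get_mem_stats, which the target would otherwise serve, is rejected and the NOT wrapper treats that failure as a pass. Driven by hand, the same exchange would look roughly like this (paths as in this workspace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC spdk_get_version         # allowed: returns the version JSON shown above
$RPC rpc_get_methods          # allowed: lists exactly the two permitted methods
$RPC env_dpdk_get_mem_stats   # filtered by --rpcs-allowed: fails with -32601 "Method not found"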
00:05:05.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:05.303 10:33:32 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:05.303 10:33:32 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:05.303 10:33:32 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:05.563 10:33:33 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:05.563 10:33:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.563 10:33:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.563 10:33:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.563 10:33:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.563 10:33:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.563 10:33:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.563 10:33:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.563 10:33:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.563 10:33:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.563 10:33:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.563 10:33:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.563 10:33:33 version -- scripts/common.sh@344 -- # case "$op" in 00:05:05.563 10:33:33 version -- scripts/common.sh@345 -- # : 1 00:05:05.563 10:33:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.563 10:33:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.563 10:33:33 version -- scripts/common.sh@365 -- # decimal 1 00:05:05.563 10:33:33 version -- scripts/common.sh@353 -- # local d=1 00:05:05.563 10:33:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.563 10:33:33 version -- scripts/common.sh@355 -- # echo 1 00:05:05.563 10:33:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.563 10:33:33 version -- scripts/common.sh@366 -- # decimal 2 00:05:05.563 10:33:33 version -- scripts/common.sh@353 -- # local d=2 00:05:05.563 10:33:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.563 10:33:33 version -- scripts/common.sh@355 -- # echo 2 00:05:05.563 10:33:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.563 10:33:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.563 10:33:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.563 10:33:33 version -- scripts/common.sh@368 -- # return 0 00:05:05.563 10:33:33 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.563 10:33:33 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:05.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.563 --rc genhtml_branch_coverage=1 00:05:05.563 --rc genhtml_function_coverage=1 00:05:05.563 --rc genhtml_legend=1 00:05:05.563 --rc geninfo_all_blocks=1 00:05:05.563 --rc geninfo_unexecuted_blocks=1 00:05:05.563 00:05:05.563 ' 00:05:05.563 10:33:33 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:05.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.563 --rc genhtml_branch_coverage=1 00:05:05.563 --rc genhtml_function_coverage=1 00:05:05.563 --rc genhtml_legend=1 00:05:05.563 --rc geninfo_all_blocks=1 00:05:05.563 --rc geninfo_unexecuted_blocks=1 00:05:05.563 00:05:05.563 ' 00:05:05.563 10:33:33 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:05.563 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.563 --rc genhtml_branch_coverage=1 00:05:05.563 --rc genhtml_function_coverage=1 00:05:05.564 --rc genhtml_legend=1 00:05:05.564 --rc geninfo_all_blocks=1 00:05:05.564 --rc geninfo_unexecuted_blocks=1 00:05:05.564 00:05:05.564 ' 00:05:05.564 10:33:33 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:05.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.564 --rc genhtml_branch_coverage=1 00:05:05.564 --rc genhtml_function_coverage=1 00:05:05.564 --rc genhtml_legend=1 00:05:05.564 --rc geninfo_all_blocks=1 00:05:05.564 --rc geninfo_unexecuted_blocks=1 00:05:05.564 00:05:05.564 ' 00:05:05.564 10:33:33 version -- app/version.sh@17 -- # get_header_version major 00:05:05.564 10:33:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:05.564 10:33:33 version -- app/version.sh@14 -- # cut -f2 00:05:05.564 10:33:33 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.564 10:33:33 version -- app/version.sh@17 -- # major=25 00:05:05.564 10:33:33 version -- app/version.sh@18 -- # get_header_version minor 00:05:05.564 10:33:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:05.564 10:33:33 version -- app/version.sh@14 -- # cut -f2 00:05:05.564 10:33:33 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.564 10:33:33 version -- app/version.sh@18 -- # minor=1 00:05:05.564 10:33:33 version -- app/version.sh@19 -- # get_header_version patch 00:05:05.564 10:33:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:05.564 10:33:33 version -- app/version.sh@14 -- # cut -f2 00:05:05.564 10:33:33 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.564 10:33:33 version -- app/version.sh@19 -- # patch=0 00:05:05.564 10:33:33 version -- app/version.sh@20 -- # get_header_version suffix 00:05:05.564 10:33:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:05.564 10:33:33 version -- app/version.sh@14 -- # cut -f2 00:05:05.564 10:33:33 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.564 10:33:33 version -- app/version.sh@20 -- # suffix=-pre 00:05:05.564 10:33:33 version -- app/version.sh@22 -- # version=25.1 00:05:05.564 10:33:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:05.564 10:33:33 version -- app/version.sh@28 -- # version=25.1rc0 00:05:05.564 10:33:33 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:05.564 10:33:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:05.564 10:33:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:05.564 10:33:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:05.564 00:05:05.564 real 0m0.231s 00:05:05.564 user 0m0.144s 00:05:05.564 sys 0m0.128s 00:05:05.564 10:33:33 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.564 
10:33:33 version -- common/autotest_common.sh@10 -- # set +x 00:05:05.564 ************************************ 00:05:05.564 END TEST version 00:05:05.564 ************************************ 00:05:05.564 10:33:33 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:05.564 10:33:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:05.564 10:33:33 -- spdk/autotest.sh@194 -- # uname -s 00:05:05.564 10:33:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:05.564 10:33:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:05.564 10:33:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:05.564 10:33:33 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:05.564 10:33:33 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:05.564 10:33:33 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:05.564 10:33:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.564 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:05:05.564 10:33:33 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:05.564 10:33:33 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:05.564 10:33:33 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:05.564 10:33:33 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:05.564 10:33:33 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:05.564 10:33:33 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:05.564 10:33:33 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:05.564 10:33:33 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:05.564 10:33:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:05.564 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:05:05.564 ************************************ 00:05:05.564 START TEST nvmf_tcp 00:05:05.564 ************************************ 00:05:05.564 10:33:33 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:05.823 * Looking for test storage... 
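The version test above rebuilds the SPDK version string from include/spdk/version.h with three get_header_version calls and compares it with what the Python bindings report. Assuming the header carries tab-separated defines in the usual form (the exact layout of version.h is not visible in this log, only the grep/cut/tr pipeline and its results), the extraction reduces to:

HDR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')   # 25
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')   # 1
patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')   # 0
version=${major}.${minor}     # 25.1; the patch component is skipped because it is 0
version=${version}rc0         # the -pre suffix is rendered as rc0 in the trace above, giving 25.1rc0
# the job sets PYTHONPATH to the in-tree python/ dir so the comparison below resolves:
python3 -c 'import spdk; print(spdk.__version__)'   # must print the same 25.1rc0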
00:05:05.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:05.823 10:33:33 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:05.823 10:33:33 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:05.823 10:33:33 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:05.823 10:33:33 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.823 10:33:33 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:05.823 10:33:33 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.823 10:33:33 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:05.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.823 --rc genhtml_branch_coverage=1 00:05:05.823 --rc genhtml_function_coverage=1 00:05:05.823 --rc genhtml_legend=1 00:05:05.823 --rc geninfo_all_blocks=1 00:05:05.823 --rc geninfo_unexecuted_blocks=1 00:05:05.823 00:05:05.823 ' 00:05:05.823 10:33:33 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:05.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.823 --rc genhtml_branch_coverage=1 00:05:05.823 --rc genhtml_function_coverage=1 00:05:05.823 --rc genhtml_legend=1 00:05:05.823 --rc geninfo_all_blocks=1 00:05:05.823 --rc geninfo_unexecuted_blocks=1 00:05:05.823 00:05:05.823 ' 00:05:05.823 10:33:33 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:05.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.823 --rc genhtml_branch_coverage=1 00:05:05.823 --rc genhtml_function_coverage=1 00:05:05.823 --rc genhtml_legend=1 00:05:05.823 --rc geninfo_all_blocks=1 00:05:05.823 --rc geninfo_unexecuted_blocks=1 00:05:05.823 00:05:05.823 ' 00:05:05.823 10:33:33 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:05.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.823 --rc genhtml_branch_coverage=1 00:05:05.823 --rc genhtml_function_coverage=1 00:05:05.823 --rc genhtml_legend=1 00:05:05.823 --rc geninfo_all_blocks=1 00:05:05.823 --rc geninfo_unexecuted_blocks=1 00:05:05.823 00:05:05.823 ' 00:05:05.823 10:33:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:05.823 10:33:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:05.823 10:33:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:05.823 10:33:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:05.823 10:33:33 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:05.823 10:33:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.823 ************************************ 00:05:05.823 START TEST nvmf_target_core 00:05:05.823 ************************************ 00:05:05.823 10:33:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:05.823 * Looking for test storage... 00:05:05.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:05.823 10:33:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:05.823 10:33:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:05.823 10:33:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:06.082 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:06.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.083 --rc genhtml_branch_coverage=1 00:05:06.083 --rc genhtml_function_coverage=1 00:05:06.083 --rc genhtml_legend=1 00:05:06.083 --rc geninfo_all_blocks=1 00:05:06.083 --rc geninfo_unexecuted_blocks=1 00:05:06.083 00:05:06.083 ' 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:06.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.083 --rc genhtml_branch_coverage=1 00:05:06.083 --rc genhtml_function_coverage=1 00:05:06.083 --rc genhtml_legend=1 00:05:06.083 --rc geninfo_all_blocks=1 00:05:06.083 --rc geninfo_unexecuted_blocks=1 00:05:06.083 00:05:06.083 ' 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:06.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.083 --rc genhtml_branch_coverage=1 00:05:06.083 --rc genhtml_function_coverage=1 00:05:06.083 --rc genhtml_legend=1 00:05:06.083 --rc geninfo_all_blocks=1 00:05:06.083 --rc geninfo_unexecuted_blocks=1 00:05:06.083 00:05:06.083 ' 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:06.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.083 --rc genhtml_branch_coverage=1 00:05:06.083 --rc genhtml_function_coverage=1 00:05:06.083 --rc genhtml_legend=1 00:05:06.083 --rc geninfo_all_blocks=1 00:05:06.083 --rc geninfo_unexecuted_blocks=1 00:05:06.083 00:05:06.083 ' 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:06.083 
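The warning interleaved in the trace above, "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected", comes from the expansion shown right before it, '[' '' -eq 1 ']': the test builtin's -eq needs an integer on both sides, and the flag being checked is empty rather than 0 or 1, so the comparison prints the warning and simply evaluates false. A minimal sketch of the failure and of a guarded form, using a hypothetical flag name rather than the real variable from common.sh:

  flag=''                                # hypothetical SPDK_TEST_* style switch that was never exported
  [ "$flag" -eq 1 ] && echo enabled      # prints "[: : integer expression expected" and evaluates false
  [ "${flag:-0}" -eq 1 ] && echo enabled || echo 'disabled (defaulted to 0)'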
************************************ 00:05:06.083 START TEST nvmf_abort 00:05:06.083 ************************************ 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:06.083 * Looking for test storage... 00:05:06.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:06.083 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:06.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.343 --rc genhtml_branch_coverage=1 00:05:06.343 --rc genhtml_function_coverage=1 00:05:06.343 --rc genhtml_legend=1 00:05:06.343 --rc geninfo_all_blocks=1 00:05:06.343 --rc geninfo_unexecuted_blocks=1 00:05:06.343 00:05:06.343 ' 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:06.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.343 --rc genhtml_branch_coverage=1 00:05:06.343 --rc genhtml_function_coverage=1 00:05:06.343 --rc genhtml_legend=1 00:05:06.343 --rc geninfo_all_blocks=1 00:05:06.343 --rc geninfo_unexecuted_blocks=1 00:05:06.343 00:05:06.343 ' 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:06.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.343 --rc genhtml_branch_coverage=1 00:05:06.343 --rc genhtml_function_coverage=1 00:05:06.343 --rc genhtml_legend=1 00:05:06.343 --rc geninfo_all_blocks=1 00:05:06.343 --rc geninfo_unexecuted_blocks=1 00:05:06.343 00:05:06.343 ' 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:06.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.343 --rc genhtml_branch_coverage=1 00:05:06.343 --rc genhtml_function_coverage=1 00:05:06.343 --rc genhtml_legend=1 00:05:06.343 --rc geninfo_all_blocks=1 00:05:06.343 --rc geninfo_unexecuted_blocks=1 00:05:06.343 00:05:06.343 ' 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.343 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
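For reference, the environment that test/nvmf/common.sh establishes in the trace above condenses to the assignments below. The values are copied from the log; the host NQN is generated per runner by nvme gen-hostnqn, and the parameter expansion used for NVME_HOSTID is only a compact way of expressing that the log shows the ID matching the uuid suffix of the NQN, not necessarily how common.sh itself derives it.

  NVMF_PORT=4420                          # first NVMe/TCP listener port (4421/4422 are the second/third)
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<runner uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NET_TYPE=phy                            # physical e810 ports rather than virtual interfaces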
00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:06.344 10:33:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:12.910 10:33:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:12.910 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:12.910 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:12.910 10:33:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:12.910 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:12.911 Found net devices under 0000:86:00.0: cvl_0_0 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:12.911 Found net devices under 0000:86:00.1: cvl_0_1 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:12.911 10:33:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:12.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:12.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:05:12.911 00:05:12.911 --- 10.0.0.2 ping statistics --- 00:05:12.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:12.911 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:12.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
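Condensed, the network setup traced above is: the first e810 port (cvl_0_0) is flushed and moved into a private namespace where it carries the target address 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits the NVMe/TCP port, and a ping in each direction confirms reachability. The commands are the ones shown in the trace, with the SPDK_NVMF comment tag on the iptables rule left out for brevity.

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator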
00:05:12.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:05:12.911 00:05:12.911 --- 10.0.0.1 ping statistics --- 00:05:12.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:12.911 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2509020 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2509020 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2509020 ']' 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.911 [2024-11-07 10:33:39.749614] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
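The target application is then started inside that namespace: the initiator side loads nvme-tcp, and nvmf_tgt runs under ip netns exec with shared-memory id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF) and core mask 0xE, which matches the three reactors reported on cores 1, 2 and 3 below. A rough equivalent from the spdk checkout, with the polling loop standing in for the harness's waitforlisten helper (the rpc_get_methods probe is my choice, not something the trace shows):

  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                          # target not yet listening on its RPC socket
  done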
00:05:12.911 [2024-11-07 10:33:39.749656] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:12.911 [2024-11-07 10:33:39.815392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.911 [2024-11-07 10:33:39.856895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:12.911 [2024-11-07 10:33:39.856935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:12.911 [2024-11-07 10:33:39.856943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:12.911 [2024-11-07 10:33:39.856949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:12.911 [2024-11-07 10:33:39.856953] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:12.911 [2024-11-07 10:33:39.858300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.911 [2024-11-07 10:33:39.858388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.911 [2024-11-07 10:33:39.858390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.911 10:33:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.911 [2024-11-07 10:33:40.002302] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.911 Malloc0 00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.911 Delay0 
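With the target listening, abort.sh assembles its block stack over RPC, all of which is visible in the trace: a TCP transport, a 64 MiB malloc bdev with 4096-byte blocks, and a delay bdev stacked on top with large artificial latencies (the four values set average and 99th-percentile read/write latency, given in microseconds), plausibly so that plenty of I/O is still outstanding when the abort commands arrive. Condensed to plain rpc.py calls (the trace itself goes through the harness's rpc_cmd wrapper; the rpc helper here is mine):

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB RAM-backed bdev, 4 KiB blocks
  rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000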
00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.911 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:12.912 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.912 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.912 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.912 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:12.912 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.912 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.912 [2024-11-07 10:33:40.073380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:12.912 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.912 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:12.912 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.912 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.912 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.912 10:33:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:12.912 [2024-11-07 10:33:40.159289] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:14.812 [2024-11-07 10:33:42.229490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1406db0 is same with the state(6) to be set 00:05:14.812 Initializing NVMe Controllers 00:05:14.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:14.812 controller IO queue size 128 less than required 00:05:14.812 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:14.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:14.812 Initialization complete. Launching workers. 
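The remaining RPCs in the trace expose Delay0 through an NVMe-oF subsystem, add TCP listeners for the subsystem and for discovery, and then launch the bundled abort example against the target; the connection string and flags are exactly the ones shown above, and the same rpc helper as in the previous sketch is repeated here to keep the snippet self-contained.

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0     # -a allows any host NQN to connect
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128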
00:05:14.812 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36509 00:05:14.812 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36570, failed to submit 62 00:05:14.812 success 36513, unsuccessful 57, failed 0 00:05:14.812 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:14.812 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.812 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:14.812 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.812 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:14.812 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:14.812 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:14.812 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:14.813 rmmod nvme_tcp 00:05:14.813 rmmod nvme_fabrics 00:05:14.813 rmmod nvme_keyring 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2509020 ']' 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2509020 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2509020 ']' 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2509020 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2509020 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2509020' 00:05:14.813 killing process with pid 2509020 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2509020 00:05:14.813 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2509020 00:05:15.071 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:15.072 10:33:42 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:15.072 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:15.072 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:15.072 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:15.072 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:15.072 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:15.072 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:15.072 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:15.072 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:15.072 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:15.072 10:33:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:16.975 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:16.975 00:05:16.975 real 0m11.033s 00:05:16.975 user 0m11.384s 00:05:16.975 sys 0m5.347s 00:05:16.975 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.975 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:16.975 ************************************ 00:05:16.975 END TEST nvmf_abort 00:05:16.975 ************************************ 00:05:17.234 10:33:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:17.234 10:33:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:17.235 ************************************ 00:05:17.235 START TEST nvmf_ns_hotplug_stress 00:05:17.235 ************************************ 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:17.235 * Looking for test storage... 
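For completeness, the nvmftestfini sequence that closes out nvmf_abort just above (before ns_hotplug_stress begins) reduces to unloading the host-side NVMe modules, killing the target process, dropping the iptables rules the test tagged with SPDK_NVMF, and removing the namespace. This is a reconstruction from the trace; the explicit ip netns delete is assumed to be what _remove_spdk_ns does rather than something printed in the log.

  modprobe -v -r nvme-tcp nvme-fabrics                    # nvme_keyring goes with them as a dependency
  kill "$nvmfpid" && wait "$nvmfpid"
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep everything except the test's tagged rules
  ip netns delete cvl_0_0_ns_spdk                         # assumed content of _remove_spdk_ns
  ip -4 addr flush cvl_0_1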
00:05:17.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:17.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.235 --rc genhtml_branch_coverage=1 00:05:17.235 --rc genhtml_function_coverage=1 00:05:17.235 --rc genhtml_legend=1 00:05:17.235 --rc geninfo_all_blocks=1 00:05:17.235 --rc geninfo_unexecuted_blocks=1 00:05:17.235 00:05:17.235 ' 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:17.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.235 --rc genhtml_branch_coverage=1 00:05:17.235 --rc genhtml_function_coverage=1 00:05:17.235 --rc genhtml_legend=1 00:05:17.235 --rc geninfo_all_blocks=1 00:05:17.235 --rc geninfo_unexecuted_blocks=1 00:05:17.235 00:05:17.235 ' 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:17.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.235 --rc genhtml_branch_coverage=1 00:05:17.235 --rc genhtml_function_coverage=1 00:05:17.235 --rc genhtml_legend=1 00:05:17.235 --rc geninfo_all_blocks=1 00:05:17.235 --rc geninfo_unexecuted_blocks=1 00:05:17.235 00:05:17.235 ' 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:17.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.235 --rc genhtml_branch_coverage=1 00:05:17.235 --rc genhtml_function_coverage=1 00:05:17.235 --rc genhtml_legend=1 00:05:17.235 --rc geninfo_all_blocks=1 00:05:17.235 --rc geninfo_unexecuted_blocks=1 00:05:17.235 00:05:17.235 ' 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.235 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:17.236 10:33:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:22.503 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:22.503 
10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:22.503 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:22.503 Found net devices under 0000:86:00.0: cvl_0_0 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:22.503 Found net devices under 0000:86:00.1: cvl_0_1 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:22.503 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:22.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:22.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:05:22.504 00:05:22.504 --- 10.0.0.2 ping statistics --- 00:05:22.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:22.504 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:22.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:22.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:05:22.504 00:05:22.504 --- 10.0.0.1 ping statistics --- 00:05:22.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:22.504 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2512835 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2512835 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
2512835 ']' 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:22.504 10:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:22.504 [2024-11-07 10:33:49.902884] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:05:22.504 [2024-11-07 10:33:49.902934] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:22.504 [2024-11-07 10:33:49.970478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:22.504 [2024-11-07 10:33:50.016624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:22.504 [2024-11-07 10:33:50.016671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:22.504 [2024-11-07 10:33:50.016682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:22.504 [2024-11-07 10:33:50.016691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:22.504 [2024-11-07 10:33:50.016697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
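For reference, the nvmftestinit sequence traced above reduces to the following steps (condensed from the commands visible in the log; cvl_0_0 and cvl_0_1 are the two ice-bound ports found earlier, the iptables comment tag and error handling are omitted, and the shorthand variable names are ours, not the script's):

    # One port is moved into a private network namespace and acts as the target
    # side; the other port stays in the root namespace as the initiator side.
    TARGET_IF=cvl_0_0
    INITIATOR_IF=cvl_0_1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Let NVMe/TCP traffic on the default port through the host firewall.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

    # Verify connectivity in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # The nvmf target itself is then launched inside the namespace:
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &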
00:05:22.504 [2024-11-07 10:33:50.018472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.504 [2024-11-07 10:33:50.018557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.504 [2024-11-07 10:33:50.018561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.504 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:22.504 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:22.504 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:22.504 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:22.504 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:22.504 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:22.504 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:22.504 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:22.762 [2024-11-07 10:33:50.324395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.762 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:23.021 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:23.279 [2024-11-07 10:33:50.717822] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:23.279 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:23.279 10:33:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:23.537 Malloc0 00:05:23.537 10:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:23.804 Delay0 00:05:23.804 10:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.064 10:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:24.064 NULL1 00:05:24.322 10:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:24.322 10:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:24.322 10:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2513305 00:05:24.322 10:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:24.322 10:33:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.580 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.838 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:24.838 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:25.096 true 00:05:25.096 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:25.096 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.354 10:33:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.354 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:25.354 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:25.611 true 00:05:25.611 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:25.611 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.869 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.127 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:26.127 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:26.385 true 00:05:26.385 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:26.385 10:33:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.643 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.643 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:26.644 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:26.901 true 00:05:26.901 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:26.901 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.159 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.417 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:27.417 10:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:27.675 true 00:05:27.675 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:27.675 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.675 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.933 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:27.933 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:28.191 true 00:05:28.191 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:28.191 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.449 10:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.707 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:28.707 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:28.707 true 00:05:28.965 10:33:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:28.965 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.965 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.223 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:29.223 10:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:29.481 true 00:05:29.481 10:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:29.481 10:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.739 10:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.997 10:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:29.997 10:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:29.997 true 00:05:30.255 10:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:30.255 10:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.255 10:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.513 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:30.513 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:30.770 true 00:05:30.770 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:30.770 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.028 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.286 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:31.286 10:33:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:31.286 true 00:05:31.286 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:31.544 10:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.545 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.803 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:31.803 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:32.060 true 00:05:32.060 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:32.060 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.318 10:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.576 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:32.576 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:32.576 true 00:05:32.576 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:32.576 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.834 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.092 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:33.092 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:33.350 true 00:05:33.350 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:33.350 10:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.608 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.865 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:33.865 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:33.865 true 00:05:33.865 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:33.865 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.123 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.381 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:34.381 10:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:34.639 true 00:05:34.639 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:34.639 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.897 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.155 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:35.155 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:35.155 true 00:05:35.155 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:35.155 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.412 10:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.670 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:35.670 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:35.928 true 00:05:35.928 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:35.928 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.186 10:34:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.444 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:36.444 10:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:36.444 true 00:05:36.444 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:36.444 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.702 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.960 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:36.960 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:37.218 true 00:05:37.218 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:37.218 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.476 10:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.476 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:37.476 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:37.734 true 00:05:37.734 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:37.734 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.992 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.250 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:38.250 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:38.507 true 00:05:38.507 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:38.507 10:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.507 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.763 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:38.763 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:39.021 true 00:05:39.021 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:39.021 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.279 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.536 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:39.536 10:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:39.536 true 00:05:39.536 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:39.536 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.793 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.051 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:40.051 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:40.309 true 00:05:40.309 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:40.309 10:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.568 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.568 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:40.827 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:40.827 true 00:05:40.827 10:34:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:40.827 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.085 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.343 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:41.343 10:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:41.600 true 00:05:41.600 10:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:41.600 10:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.858 10:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.859 10:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:41.859 10:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:42.116 true 00:05:42.116 10:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:42.116 10:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.374 10:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.632 10:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:42.632 10:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:42.632 true 00:05:42.632 10:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:42.632 10:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.890 10:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.148 10:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:43.148 10:34:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:43.417 true 00:05:43.417 10:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:43.417 10:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.717 10:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.991 10:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:43.991 10:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:43.991 true 00:05:43.991 10:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:43.991 10:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.260 10:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.518 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:44.518 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:44.776 true 00:05:44.776 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:44.776 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.776 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.034 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:45.034 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:45.292 true 00:05:45.292 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:45.292 10:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.549 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.807 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:45.807 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:46.065 true 00:05:46.065 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:46.065 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.323 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.323 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:46.323 10:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:46.581 true 00:05:46.581 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:46.581 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.840 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.098 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:47.098 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:47.356 true 00:05:47.356 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:47.356 10:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.614 10:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.614 10:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:47.614 10:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:47.872 true 00:05:47.872 10:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:47.872 10:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.130 10:34:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.388 10:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:48.388 10:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:48.388 true 00:05:48.646 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:48.646 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.646 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.904 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:48.904 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:49.161 true 00:05:49.162 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:49.162 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.420 10:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.678 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:49.678 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:49.678 true 00:05:49.936 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:49.936 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.936 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.194 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:50.194 10:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:50.452 true 00:05:50.452 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:50.452 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.709 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.967 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:50.967 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:51.225 true 00:05:51.225 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:51.225 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.225 10:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.482 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:51.482 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:51.740 true 00:05:51.740 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:51.740 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.997 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.254 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:52.254 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:52.254 true 00:05:52.512 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:52.512 10:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.512 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.769 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:52.769 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:53.028 true 00:05:53.028 10:34:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:53.028 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.286 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.544 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:53.544 10:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:53.544 true 00:05:53.544 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:53.544 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.803 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.061 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:54.061 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:54.318 true 00:05:54.318 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:54.318 10:34:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.577 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.577 Initializing NVMe Controllers 00:05:54.577 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:54.577 Controller IO queue size 128, less than required. 00:05:54.577 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:54.577 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:54.577 Initialization complete. Launching workers. 
00:05:54.577 ======================================================== 00:05:54.577 Latency(us) 00:05:54.577 Device Information : IOPS MiB/s Average min max 00:05:54.577 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 26730.47 13.05 4788.54 2806.80 8115.63 00:05:54.577 ======================================================== 00:05:54.577 Total : 26730.47 13.05 4788.54 2806.80 8115.63 00:05:54.577 00:05:54.835 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:05:54.835 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:05:54.835 true 00:05:54.835 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2513305 00:05:54.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2513305) - No such process 00:05:54.835 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2513305 00:05:54.835 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.093 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:55.351 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:55.351 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:55.351 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:55.351 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:55.351 10:34:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:55.608 null0 00:05:55.608 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:55.608 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:55.608 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:55.608 null1 00:05:55.608 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:55.608 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:55.608 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:55.866 null2 00:05:55.866 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:55.866 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:55.866 
10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:56.124 null3 00:05:56.124 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:56.124 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:56.124 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:56.381 null4 00:05:56.381 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:56.381 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:56.381 10:34:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:56.381 null5 00:05:56.382 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:56.382 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:56.382 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:56.639 null6 00:05:56.639 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:56.639 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:56.639 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:56.936 null7 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
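The repeating @44-@50 trace above is the single-namespace resize stress loop: while the background I/O workload (PID 2513305 in this run) is still alive, the test hot-removes namespace 1, re-adds it backed by the Delay0 bdev, and resizes NULL1 by one unit per pass (1023, 1024, ... 1048 in this log). A minimal bash sketch of that pattern, reconstructed from the xtrace rather than copied from the script; rpc_py, perf_pid, and the starting null_size are illustrative shorthands:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
perf_pid=2513305                  # PID of the background I/O workload; in the real script this is captured from $!

while kill -0 "$perf_pid"; do                                          # line 44: loop while the workload is alive
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # line 45: hot-remove namespace 1
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # line 46: re-add it backed by Delay0
    null_size=$((null_size + 1))                                       # line 49: next target size
    $rpc_py bdev_null_resize NULL1 "$null_size"                        # line 50: resize NULL1 under active hotplug
done
wait "$perf_pid"                                                       # line 53: reached once kill -0 reports "No such process"

Once the workload exits, the script tears down both namespaces (the @54/@55 remove_ns calls above) and moves on to the parallel phase traced below.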
00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2518965 2518966 2518969 2518970 2518972 2518974 2518976 2518978 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:56.936 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:56.937 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.937 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:57.195 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:57.195 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.195 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:57.195 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:57.195 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:57.195 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:57.195 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:57.195 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:57.195 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.195 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.195 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:57.454 10:34:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:57.454 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:57.454 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:57.454 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.454 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:57.454 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:57.454 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:57.454 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:57.454 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.712 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:57.969 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:57.969 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:57.969 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:57.969 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.969 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:57.969 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:57.969 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:57.969 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.227 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:58.485 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:58.485 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.485 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:58.485 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:58.485 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.485 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:58.485 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:58.485 10:34:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:58.485 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
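The @58-@66 trace a little above created eight null bdevs (null0-null7) and launched eight add_remove workers in the background; the interleaved @14-@18 add/remove calls around this point come from those workers, each cycling its own namespace ID ten times against nqn.2016-06.io.spdk:cnode1. A minimal bash sketch of that phase, reconstructed from the xtrace rather than copied from the script; rpc_py is an illustrative shorthand for the scripts/rpc.py path seen in the log:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

add_remove() {                                                         # lines 14-18: one worker
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do                                     # line 16: ten add/remove cycles
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8                                                             # line 58
pids=()
for ((i = 0; i < nthreads; i++)); do                                   # lines 59-60: null0..null7, same size/block
    $rpc_py bdev_null_create "null$i" 100 4096                         #   arguments as in the trace
done
for ((i = 0; i < nthreads; i++)); do                                   # lines 62-64: namespace IDs 1-8, one worker per bdev
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"                                                      # line 66: the eight PIDs listed in the wait trace above

Because the eight workers run concurrently, their @16-@18 trace lines interleave in the log in no fixed order, which is the namespace hotplug stress being exercised here.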
00:05:58.485 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.486 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:58.743 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:58.743 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:58.743 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:58.743 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.743 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.743 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:58.743 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:58.744 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.002 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:59.260 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.260 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:59.260 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:59.260 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.260 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:59.260 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:59.260 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:59.260 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.517 10:34:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:59.517 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.517 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.775 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:59.776 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.034 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:00.034 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:00.034 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.034 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:00.034 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:00.034 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:00.034 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:00.034 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.292 10:34:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:00.550 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:00.550 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:00.550 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.550 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:00.550 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:00.550 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:00.550 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:00.550 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:00.550 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
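Each pass above should leave the subsystem with either all eight namespaces attached (right after the adds) or none (right after the removes). One quick way to spot-check that state between passes, not part of ns_hotplug_stress.sh itself, using the standard nvmf_get_subsystems RPC and the $rpc_py path from the sketch above (assumes rpc.py's default pretty-printed JSON output):

    # Counts "nsid" fields across all subsystems; expect 8 after an add burst,
    # 0 after a remove burst.
    "$rpc_py" nvmf_get_subsystems | grep -c '"nsid"'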
00:06:00.550 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.550 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:00.807 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.807 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.807 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:00.807 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.807 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:00.808 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:01.066 rmmod nvme_tcp 00:06:01.066 rmmod nvme_fabrics 00:06:01.066 rmmod nvme_keyring 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2512835 ']' 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2512835 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2512835 ']' 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2512835 00:06:01.066 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2512835 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2512835' 00:06:01.325 killing process with pid 2512835 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2512835 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2512835 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:01.325 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.326 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:01.326 10:34:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:03.858 00:06:03.858 real 0m46.313s 00:06:03.858 user 3m21.592s 00:06:03.858 sys 0m16.165s 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:03.858 ************************************ 00:06:03.858 END TEST nvmf_ns_hotplug_stress 00:06:03.858 ************************************ 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:03.858 ************************************ 00:06:03.858 START TEST nvmf_delete_subsystem 00:06:03.858 ************************************ 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:03.858 * Looking for test storage... 
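The rmmod/killprocess/iptables lines above are nvmftestfini tearing the first test down before nvmf_delete_subsystem starts. Condensed from the trace into one sequence (PID and interface names are the ones from this run; the remove_spdk_ns step is paraphrased, since its implementation is not shown in the trace):

    # Teardown sequence as captured above, paraphrased.
    modprobe -v -r nvme-tcp                                  # also drops nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 2512835 && wait 2512835                             # killprocess $nvmfpid
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # strip only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true      # assumed body of remove_spdk_ns
    ip -4 addr flush cvl_0_1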
00:06:03.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.858 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:03.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.859 --rc genhtml_branch_coverage=1 00:06:03.859 --rc genhtml_function_coverage=1 00:06:03.859 --rc genhtml_legend=1 00:06:03.859 --rc geninfo_all_blocks=1 00:06:03.859 --rc geninfo_unexecuted_blocks=1 00:06:03.859 00:06:03.859 ' 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:03.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.859 --rc genhtml_branch_coverage=1 00:06:03.859 --rc genhtml_function_coverage=1 00:06:03.859 --rc genhtml_legend=1 00:06:03.859 --rc geninfo_all_blocks=1 00:06:03.859 --rc geninfo_unexecuted_blocks=1 00:06:03.859 00:06:03.859 ' 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:03.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.859 --rc genhtml_branch_coverage=1 00:06:03.859 --rc genhtml_function_coverage=1 00:06:03.859 --rc genhtml_legend=1 00:06:03.859 --rc geninfo_all_blocks=1 00:06:03.859 --rc geninfo_unexecuted_blocks=1 00:06:03.859 00:06:03.859 ' 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:03.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.859 --rc genhtml_branch_coverage=1 00:06:03.859 --rc genhtml_function_coverage=1 00:06:03.859 --rc genhtml_legend=1 00:06:03.859 --rc geninfo_all_blocks=1 00:06:03.859 --rc geninfo_unexecuted_blocks=1 00:06:03.859 00:06:03.859 ' 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:03.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:03.859 10:34:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:09.125 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:09.125 
10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:09.125 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.125 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:09.125 Found net devices under 0000:86:00.0: cvl_0_0 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:09.126 Found net devices under 0000:86:00.1: cvl_0_1 
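The "Found 0000:86:00.x" and "Found net devices under ..." lines come from gather_supported_nvmf_pci_devs in nvmf/common.sh: because autorun-spdk.conf sets SPDK_TEST_NVMF_NICS=e810, it keeps only PCI functions with the Intel E810 device IDs (0x1592/0x159b) and then maps each one to its kernel netdev through sysfs. The mapping step, paraphrased (the real function also covers x722 and Mellanox IDs plus RDMA-specific checks; the link-state test is assumed from the '[[ up == up ]]' lines):

    # Resolve each selected PCI function to its netdev name, e.g. cvl_0_0 / cvl_0_1.
    for pci in 0000:86:00.0 0000:86:00.1; do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ $(cat "$netdev/operstate") == up ]] || continue
            echo "Found net devices under $pci: ${netdev##*/}"
        done
    done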
00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:09.126 10:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:09.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:09.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:06:09.126 00:06:09.126 --- 10.0.0.2 ping statistics --- 00:06:09.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.126 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:09.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:09.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:06:09.126 00:06:09.126 --- 10.0.0.1 ping statistics --- 00:06:09.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.126 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2523142 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2523142 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2523142 ']' 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
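nvmf_tcp_init, traced above, splits the two ports into a target side and an initiator side: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, cvl_0_1 stays in the default namespace as 10.0.0.1/24, TCP port 4420 is opened with a tagged iptables rule, and connectivity is checked with one ping in each direction. Condensed from the trace; the interface, namespace and address values are specific to this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listener port; the comment tag lets teardown strip the rule later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator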
00:06:09.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:09.126 [2024-11-07 10:34:36.255534] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:09.126 [2024-11-07 10:34:36.255583] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:09.126 [2024-11-07 10:34:36.322263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.126 [2024-11-07 10:34:36.364002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:09.126 [2024-11-07 10:34:36.364039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:09.126 [2024-11-07 10:34:36.364047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:09.126 [2024-11-07 10:34:36.364053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:09.126 [2024-11-07 10:34:36.364058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:09.126 [2024-11-07 10:34:36.365266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.126 [2024-11-07 10:34:36.365269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.126 [2024-11-07 10:34:36.497658] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:09.126 
10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.126 [2024-11-07 10:34:36.513862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.126 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.127 NULL1 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.127 Delay0 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2523295 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:09.127 10:34:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:09.127 [2024-11-07 10:34:36.598554] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
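With the target application up (nvmf_tgt, launched under ip netns exec cvl_0_0_ns_spdk) and its RPC socket at /var/tmp/spdk.sock, delete_subsystem.sh builds its fixture over RPC and starts a five second spdk_nvme_perf run against it. The sequence below is condensed from the trace; rpc_cmd is the harness wrapper around SPDK's RPC client, sketched here as a plain scripts/rpc.py call (that equivalence is an assumption, only the socket path and the arguments come from the trace):

  rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }    # stand-in for the harness' rpc_cmd

  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512                     # backing null bdev
  rpc bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000         # wrap it with large artificial latency
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Queue I/O against the slow namespace; the subsystem will be deleted underneath it.
  build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!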
00:06:11.026 10:34:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:11.026 10:34:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.026 10:34:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 
00:06:11.285 starting I/O failed: -6 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 
Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 starting I/O failed: -6 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 [2024-11-07 10:34:38.850372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c680 is same with the state(6) to be set 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 Write completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.285 Read completed with error (sct=0, sc=8) 00:06:11.285 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error 
(sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Write 
completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Write completed with error (sct=0, sc=8) 00:06:11.286 starting I/O failed: -6 00:06:11.286 Read completed with error (sct=0, sc=8) 00:06:11.286 [2024-11-07 10:34:38.850838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff770000c40 is same with the state(6) to be set 00:06:12.219 [2024-11-07 10:34:39.816872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214d9b0 is same with the state(6) to be set 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 [2024-11-07 10:34:39.851423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c2c0 is same with the state(6) to be set 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed 
with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 [2024-11-07 10:34:39.851680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff77000d350 is same with the state(6) to be set 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, 
sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 [2024-11-07 10:34:39.851852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c4a0 is same with the state(6) to be set 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Write completed with error (sct=0, sc=8) 00:06:12.219 Read completed with error (sct=0, sc=8) 00:06:12.219 [2024-11-07 10:34:39.852587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c860 is same with the state(6) to be set 00:06:12.219 Initializing NVMe Controllers 00:06:12.219 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:12.219 Controller IO queue size 128, less than required. 00:06:12.219 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:12.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:12.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:12.219 Initialization complete. Launching workers. 00:06:12.219 ======================================================== 00:06:12.219 Latency(us) 00:06:12.219 Device Information : IOPS MiB/s Average min max 00:06:12.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.49 0.10 943529.77 951.42 1012076.37 00:06:12.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 174.65 0.09 860229.16 364.32 1012013.49 00:06:12.219 ======================================================== 00:06:12.219 Total : 370.14 0.18 904224.38 364.32 1012076.37 00:06:12.219 00:06:12.219 [2024-11-07 10:34:39.853093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214d9b0 (9): Bad file descriptor 00:06:12.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:12.219 10:34:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.219 10:34:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:12.219 10:34:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2523295 00:06:12.219 10:34:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2523295 00:06:12.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2523295) - No such process 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2523295 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2523295 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2523295 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.784 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.785 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:12.785 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.785 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.785 [2024-11-07 10:34:40.381547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:12.785 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.785 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.785 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.785 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.785 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.785 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2523856 00:06:12.785 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:12.785 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:12.785 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2523856 00:06:12.785 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:12.785 [2024-11-07 10:34:40.450886] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
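The long runs of "Read/Write completed with error (sct=0, sc=8)" above are what the test expects from the first phase: nvmf_delete_subsystem is issued while perf still has up to 128 commands queued against the delayed namespace, so the outstanding I/O is completed with errors, perf exits reporting "errors occurred", and the NOT wait check confirms the non-zero status. The second phase, traced here, re-creates the same subsystem on the still-running target, re-adds the 4420 listener and the Delay0 namespace, starts a shorter three second perf run (pid 2523856), and polls for its exit; the iteration budget differs between the phases (30 vs 20). The shape of that poll, read off the delay/kill -0/sleep records (a sketch, not the verbatim script):

  delay=0
  while kill -0 "$perf_pid"; do      # the 'No such process' messages in the log are this check failing once perf has exited
      sleep 0.5
      (( delay++ > 20 )) && exit 1   # safety valve: give up after roughly 10 s of polling
  done
  wait "$perf_pid"                   # reap the run and collect its exit status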
00:06:13.351 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:13.351 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2523856 00:06:13.351 10:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:13.917 10:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:13.917 10:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2523856 00:06:13.917 10:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:14.483 10:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:14.483 10:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2523856 00:06:14.483 10:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:15.049 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:15.049 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2523856 00:06:15.049 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:15.306 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:15.306 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2523856 00:06:15.306 10:34:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:15.872 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:15.872 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2523856 00:06:15.872 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:15.872 Initializing NVMe Controllers 00:06:15.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:15.872 Controller IO queue size 128, less than required. 00:06:15.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:15.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:15.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:15.872 Initialization complete. Launching workers. 
00:06:15.872 ======================================================== 00:06:15.872 Latency(us) 00:06:15.872 Device Information : IOPS MiB/s Average min max 00:06:15.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003769.76 1000120.43 1012319.50 00:06:15.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004919.12 1000212.68 1012316.25 00:06:15.872 ======================================================== 00:06:15.872 Total : 256.00 0.12 1004344.44 1000120.43 1012319.50 00:06:15.872 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2523856 00:06:16.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2523856) - No such process 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2523856 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:16.439 rmmod nvme_tcp 00:06:16.439 rmmod nvme_fabrics 00:06:16.439 rmmod nvme_keyring 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:16.439 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:16.440 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2523142 ']' 00:06:16.440 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2523142 00:06:16.440 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2523142 ']' 00:06:16.440 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2523142 00:06:16.440 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:16.440 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:16.440 10:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2523142 00:06:16.440 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:16.440 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:06:16.440 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2523142' 00:06:16.440 killing process with pid 2523142 00:06:16.440 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2523142 00:06:16.440 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2523142 00:06:16.699 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:16.699 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:16.699 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:16.699 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:16.699 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:16.699 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:16.699 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:16.699 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:16.699 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:16.699 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.699 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.699 10:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.602 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:18.602 00:06:18.602 real 0m15.167s 00:06:18.602 user 0m28.903s 00:06:18.602 sys 0m4.697s 00:06:18.602 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.860 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:18.860 ************************************ 00:06:18.860 END TEST nvmf_delete_subsystem 00:06:18.860 ************************************ 00:06:18.860 10:34:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:18.860 10:34:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:18.860 10:34:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.860 10:34:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:18.860 ************************************ 00:06:18.860 START TEST nvmf_host_management 00:06:18.860 ************************************ 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:18.861 * Looking for test storage... 
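nvmftestfini, whose trace closes the delete_subsystem test above, unwinds what nvmfappstart and nvmf_tcp_init set up: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, the nvmf_tgt process (pid 2523142) is killed and reaped, the SPDK_NVMF-tagged firewall rules are filtered back out, the test namespace is removed and the leftover initiator address is flushed. A condensed sketch of that cleanup with names from this run (the netns delete is an assumption about what _remove_spdk_ns does; the trace only shows the helper being invoked):

  modprobe -v -r nvme-tcp                                # the rmmod lines above show nvme_fabrics/nvme_keyring going with it
  kill "$nvmfpid" && wait "$nvmfpid"                     # stop the nvmf_tgt started by nvmfappstart
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules this test tagged
  ip netns delete cvl_0_0_ns_spdk                        # assumed equivalent of _remove_spdk_ns here
  ip -4 addr flush cvl_0_1                               # clear the initiator-side address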
00:06:18.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:18.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.861 --rc genhtml_branch_coverage=1 00:06:18.861 --rc genhtml_function_coverage=1 00:06:18.861 --rc genhtml_legend=1 00:06:18.861 --rc geninfo_all_blocks=1 00:06:18.861 --rc geninfo_unexecuted_blocks=1 00:06:18.861 00:06:18.861 ' 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:18.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.861 --rc genhtml_branch_coverage=1 00:06:18.861 --rc genhtml_function_coverage=1 00:06:18.861 --rc genhtml_legend=1 00:06:18.861 --rc geninfo_all_blocks=1 00:06:18.861 --rc geninfo_unexecuted_blocks=1 00:06:18.861 00:06:18.861 ' 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:18.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.861 --rc genhtml_branch_coverage=1 00:06:18.861 --rc genhtml_function_coverage=1 00:06:18.861 --rc genhtml_legend=1 00:06:18.861 --rc geninfo_all_blocks=1 00:06:18.861 --rc geninfo_unexecuted_blocks=1 00:06:18.861 00:06:18.861 ' 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:18.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.861 --rc genhtml_branch_coverage=1 00:06:18.861 --rc genhtml_function_coverage=1 00:06:18.861 --rc genhtml_legend=1 00:06:18.861 --rc geninfo_all_blocks=1 00:06:18.861 --rc geninfo_unexecuted_blocks=1 00:06:18.861 00:06:18.861 ' 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.861 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:19.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:19.120 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:19.121 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:19.121 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:19.121 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.121 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:19.121 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:19.121 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:19.121 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.121 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.121 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.121 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:19.121 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:19.121 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:19.121 10:34:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:24.393 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:24.394 10:34:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:24.394 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:24.394 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:24.394 Found net devices under 0000:86:00.0: cvl_0_0 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.394 10:34:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:24.394 Found net devices under 0000:86:00.1: cvl_0_1 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:24.394 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:24.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:24.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:06:24.653 00:06:24.653 --- 10.0.0.2 ping statistics --- 00:06:24.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.653 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:24.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:24.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:06:24.653 00:06:24.653 --- 10.0.0.1 ping statistics --- 00:06:24.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.653 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2528079 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2528079 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:24.653 10:34:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2528079 ']' 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:24.653 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.912 [2024-11-07 10:34:52.333393] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:06:24.912 [2024-11-07 10:34:52.333443] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.912 [2024-11-07 10:34:52.398482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.912 [2024-11-07 10:34:52.439731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:24.912 [2024-11-07 10:34:52.439773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:24.912 [2024-11-07 10:34:52.439780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:24.912 [2024-11-07 10:34:52.439786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:24.912 [2024-11-07 10:34:52.439791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
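The target above is started inside cvl_0_0_ns_spdk with -m 0x1E, a core mask that selects cores 1-4 and leaves core 0 free; the reactor notices just below confirm one reactor per selected core. A standalone snippet to decode such a mask, purely for illustration:

# 0x1E = 0b11110: bits 1-4 are set, so reactors are pinned to cores 1, 2, 3 and 4
mask=0x1E
for bit in $(seq 0 31); do (( (mask >> bit) & 1 )) && printf 'core %d\n' "$bit"; done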
00:06:24.912 [2024-11-07 10:34:52.441334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.912 [2024-11-07 10:34:52.441418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.912 [2024-11-07 10:34:52.441531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.912 [2024-11-07 10:34:52.441531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.912 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.912 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:24.912 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:24.912 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.912 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:24.912 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:24.912 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:24.912 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.170 [2024-11-07 10:34:52.586058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.170 Malloc0 00:06:25.170 [2024-11-07 10:34:52.669081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2528127 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2528127 /var/tmp/bdevperf.sock 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2528127 ']' 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:25.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:25.170 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:25.170 { 00:06:25.170 "params": { 00:06:25.170 "name": "Nvme$subsystem", 00:06:25.170 "trtype": "$TEST_TRANSPORT", 00:06:25.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:25.170 "adrfam": "ipv4", 00:06:25.170 "trsvcid": "$NVMF_PORT", 00:06:25.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:25.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:25.171 "hdgst": ${hdgst:-false}, 00:06:25.171 "ddgst": ${ddgst:-false} 00:06:25.171 }, 00:06:25.171 "method": "bdev_nvme_attach_controller" 00:06:25.171 } 00:06:25.171 EOF 00:06:25.171 )") 00:06:25.171 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:25.171 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:25.171 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:25.171 10:34:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:25.171 "params": { 00:06:25.171 "name": "Nvme0", 00:06:25.171 "trtype": "tcp", 00:06:25.171 "traddr": "10.0.0.2", 00:06:25.171 "adrfam": "ipv4", 00:06:25.171 "trsvcid": "4420", 00:06:25.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:25.171 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:25.171 "hdgst": false, 00:06:25.171 "ddgst": false 00:06:25.171 }, 00:06:25.171 "method": "bdev_nvme_attach_controller" 00:06:25.171 }' 00:06:25.171 [2024-11-07 10:34:52.762981] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
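The bdevperf run above receives its controller definition as JSON on /dev/fd/63; the printed parameters attach a bdev named Nvme0 over TCP to 10.0.0.2:4420 with the cnode0/host0 NQNs. A rough standalone equivalent is sketched below, with the attach call wrapped in SPDK's usual "subsystems"/"bdev"/"config" envelope; the generator may emit additional bdev options that are not visible in this excerpt, and the file path here is arbitrary:

# write the controller definition to a file instead of /dev/fd/63
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# then run bdevperf with the same flags as in the trace above:
# bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10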
00:06:25.171 [2024-11-07 10:34:52.763023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528127 ] 00:06:25.171 [2024-11-07 10:34:52.825244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.429 [2024-11-07 10:34:52.867157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.429 Running I/O for 10 seconds... 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.429 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.712 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.712 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=90 00:06:25.712 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 90 -ge 100 ']' 00:06:25.712 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:25.712 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:25.712 
10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:25.712 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:25.712 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:25.712 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.712 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.970 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.970 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=654 00:06:25.970 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 654 -ge 100 ']' 00:06:25.970 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:25.970 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:25.970 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:25.970 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:25.970 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.970 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.970 [2024-11-07 10:34:53.423570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec200 is same with the state(6) to be set 00:06:25.970 [2024-11-07 10:34:53.423608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec200 is same with the state(6) to be set 00:06:25.970 [2024-11-07 10:34:53.423615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec200 is same with the state(6) to be set 00:06:25.970 [2024-11-07 10:34:53.423622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec200 is same with the state(6) to be set 00:06:25.970 [2024-11-07 10:34:53.423628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fec200 is same with the state(6) to be set 00:06:25.970 [2024-11-07 10:34:53.428005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:25.970 [2024-11-07 10:34:53.428040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.970 [2024-11-07 10:34:53.428050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:25.970 [2024-11-07 10:34:53.428059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.970 [2024-11-07 10:34:53.428067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:25.970 [2024-11-07 10:34:53.428074] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.970 [2024-11-07 10:34:53.428081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:25.970 [2024-11-07 10:34:53.428088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.970 [2024-11-07 10:34:53.428094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214d510 is same with the state(6) to be set 00:06:25.970 [2024-11-07 10:34:53.428136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.970 [2024-11-07 10:34:53.428146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.970 [2024-11-07 10:34:53.428160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.970 [2024-11-07 10:34:53.428167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.970 [2024-11-07 10:34:53.428176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.970 [2024-11-07 10:34:53.428184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.970 [2024-11-07 10:34:53.428193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.971 [2024-11-07 10:34:53.428555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:06:25.971 [2024-11-07 10:34:53.428600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.971 [2024-11-07 10:34:53.428741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:06:25.971 [2024-11-07 10:34:53.428756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.971 [2024-11-07 10:34:53.428763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:06:25.972 [2024-11-07 10:34:53.428908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:25.972 [2024-11-07 10:34:53.428968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.428989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.428998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.429005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.429013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.429020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.429027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.429034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.429042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.429053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.429062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.429068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.429076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.429083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.429090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.429097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.429106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.429112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.429120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.429127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 [2024-11-07 10:34:53.429135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:25.972 [2024-11-07 10:34:53.429142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:25.972 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.972 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:25.972 [2024-11-07 10:34:53.430110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:25.972 task offset: 98304 on job bdev=Nvme0n1 fails 00:06:25.972 00:06:25.972 Latency(us) 00:06:25.972 [2024-11-07T09:34:53.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:25.972 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:25.972 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:25.972 Verification LBA range: start 0x0 length 0x400 00:06:25.972 Nvme0n1 : 0.40 1909.69 119.36 159.14 0.00 30088.76 1666.89 27582.11 00:06:25.972 [2024-11-07T09:34:53.643Z] =================================================================================================================== 00:06:25.972 [2024-11-07T09:34:53.643Z] Total : 1909.69 119.36 159.14 0.00 30088.76 1666.89 27582.11 00:06:25.972 [2024-11-07 10:34:53.432492] app.c:1064:spdk_app_stop: *WARNING*: 
spdk_app_stop'd on non-zero 00:06:25.972 [2024-11-07 10:34:53.432517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214d510 (9): Bad file descriptor 00:06:25.972 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.972 10:34:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:25.972 [2024-11-07 10:34:53.443851] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:26.904 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2528127 00:06:26.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2528127) - No such process 00:06:26.904 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:26.904 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:26.904 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:26.904 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:26.904 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:26.904 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:26.904 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:26.904 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:26.904 { 00:06:26.904 "params": { 00:06:26.904 "name": "Nvme$subsystem", 00:06:26.904 "trtype": "$TEST_TRANSPORT", 00:06:26.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:26.904 "adrfam": "ipv4", 00:06:26.904 "trsvcid": "$NVMF_PORT", 00:06:26.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:26.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:26.904 "hdgst": ${hdgst:-false}, 00:06:26.904 "ddgst": ${ddgst:-false} 00:06:26.904 }, 00:06:26.904 "method": "bdev_nvme_attach_controller" 00:06:26.904 } 00:06:26.904 EOF 00:06:26.904 )") 00:06:26.904 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:26.904 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:26.904 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:26.904 10:34:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:26.904 "params": { 00:06:26.904 "name": "Nvme0", 00:06:26.904 "trtype": "tcp", 00:06:26.904 "traddr": "10.0.0.2", 00:06:26.904 "adrfam": "ipv4", 00:06:26.904 "trsvcid": "4420", 00:06:26.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:26.904 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:26.904 "hdgst": false, 00:06:26.905 "ddgst": false 00:06:26.905 }, 00:06:26.905 "method": "bdev_nvme_attach_controller" 00:06:26.905 }' 00:06:26.905 [2024-11-07 10:34:54.493656] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
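The second bdevperf run above is driven entirely by the JSON printed by gen_nvmf_target_json and handed over on /dev/fd/62. A minimal stand-alone sketch of the same pattern follows, assuming the standard SPDK JSON-config layout ("subsystems" / "bdev" / "config"); the real helper may emit additional fields, and the paths here are illustrative only.

# Sketch only: wrap the attach-controller parameters printed above in a
# bdev-subsystem config and hand it to bdevperf through a file descriptor,
# mirroring the --json /dev/fd/62 invocation in the trace.
cat > /tmp/nvme0_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Process substitution yields a /dev/fd/NN path, just like the traced run.
./build/examples/bdevperf --json <(cat /tmp/nvme0_config.json) -q 64 -o 65536 -w verify -t 1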
00:06:26.905 [2024-11-07 10:34:54.493702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528391 ] 00:06:26.905 [2024-11-07 10:34:54.558464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.162 [2024-11-07 10:34:54.599547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.162 Running I/O for 1 seconds... 00:06:28.538 1931.00 IOPS, 120.69 MiB/s 00:06:28.538 Latency(us) 00:06:28.538 [2024-11-07T09:34:56.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:28.538 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:28.538 Verification LBA range: start 0x0 length 0x400 00:06:28.538 Nvme0n1 : 1.05 1904.73 119.05 0.00 0.00 31745.80 2792.40 42854.85 00:06:28.538 [2024-11-07T09:34:56.209Z] =================================================================================================================== 00:06:28.538 [2024-11-07T09:34:56.209Z] Total : 1904.73 119.05 0.00 0.00 31745.80 2792.40 42854.85 00:06:28.538 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:28.538 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:28.538 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:28.538 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:28.538 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:28.538 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:28.538 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:28.538 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:28.538 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:28.538 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:28.538 10:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:28.538 rmmod nvme_tcp 00:06:28.538 rmmod nvme_fabrics 00:06:28.538 rmmod nvme_keyring 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2528079 ']' 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2528079 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2528079 ']' 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2528079 00:06:28.538 10:34:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2528079 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2528079' 00:06:28.538 killing process with pid 2528079 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2528079 00:06:28.538 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2528079 00:06:28.829 [2024-11-07 10:34:56.250753] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:28.829 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:28.829 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:28.829 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:28.829 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:28.829 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:28.829 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:28.829 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:28.829 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:28.829 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:28.829 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.829 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.829 10:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.785 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:30.785 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:30.785 00:06:30.785 real 0m12.016s 00:06:30.785 user 0m19.015s 00:06:30.786 sys 0m5.371s 00:06:30.786 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:30.786 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:30.786 ************************************ 00:06:30.786 END TEST nvmf_host_management 00:06:30.786 ************************************ 00:06:30.786 10:34:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
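run_test, which produced the START TEST / END TEST banners and the per-test timing above and is invoked again here for nvmf_lvol, is only sketched below as an assumption about its shape; the verbatim helper in autotest_common.sh also manages xtrace state and cumulative timing bookkeeping.

# Illustrative wrapper only, not the autotest_common.sh implementation.
run_test_sketch() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    "$@"                                # run the test script with its arguments
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return "$rc"
}

# Usage mirroring the traced call:
# run_test_sketch nvmf_lvol ./test/nvmf/target/nvmf_lvol.sh --transport=tcp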
00:06:30.786 10:34:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:30.786 10:34:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.786 10:34:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:30.786 ************************************ 00:06:30.786 START TEST nvmf_lvol 00:06:30.786 ************************************ 00:06:30.786 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:31.045 * Looking for test storage... 00:06:31.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:31.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.045 --rc genhtml_branch_coverage=1 00:06:31.045 --rc genhtml_function_coverage=1 00:06:31.045 --rc genhtml_legend=1 00:06:31.045 --rc geninfo_all_blocks=1 00:06:31.045 --rc geninfo_unexecuted_blocks=1 00:06:31.045 00:06:31.045 ' 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:31.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.045 --rc genhtml_branch_coverage=1 00:06:31.045 --rc genhtml_function_coverage=1 00:06:31.045 --rc genhtml_legend=1 00:06:31.045 --rc geninfo_all_blocks=1 00:06:31.045 --rc geninfo_unexecuted_blocks=1 00:06:31.045 00:06:31.045 ' 00:06:31.045 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:31.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.046 --rc genhtml_branch_coverage=1 00:06:31.046 --rc genhtml_function_coverage=1 00:06:31.046 --rc genhtml_legend=1 00:06:31.046 --rc geninfo_all_blocks=1 00:06:31.046 --rc geninfo_unexecuted_blocks=1 00:06:31.046 00:06:31.046 ' 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:31.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.046 --rc genhtml_branch_coverage=1 00:06:31.046 --rc genhtml_function_coverage=1 00:06:31.046 --rc genhtml_legend=1 00:06:31.046 --rc geninfo_all_blocks=1 00:06:31.046 --rc geninfo_unexecuted_blocks=1 00:06:31.046 00:06:31.046 ' 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
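The xtrace above walks through the lcov version gate: lt 1.15 2 expands into cmp_versions 1.15 '<' 2, which splits both versions on '.', '-' and ':' and compares them field by field. A compact sketch of that comparison follows; it is not the verbatim scripts/common.sh code (which also validates each field with a decimal() check), just the same idea.

# Returns success (0) when $1 sorts strictly before $2, compared field by field.
# Missing fields are treated as 0.
version_lt() {
    local IFS='.-:'
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints the message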
00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:31.046 10:34:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:36.316 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:36.316 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:36.316 10:35:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:36.316 Found net devices under 0000:86:00.0: cvl_0_0 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:36.316 Found net devices under 0000:86:00.1: cvl_0_1 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:36.316 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:36.317 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:36.317 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:36.317 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:36.317 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:36.317 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:36.317 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:36.317 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:36.317 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:36.317 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:36.317 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:36.317 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:36.317 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:36.317 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:36.575 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:36.576 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:36.576 10:35:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:36.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:36.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:06:36.576 00:06:36.576 --- 10.0.0.2 ping statistics --- 00:06:36.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.576 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:36.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:36.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:06:36.576 00:06:36.576 --- 10.0.0.1 ping statistics --- 00:06:36.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:36.576 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2532196 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2532196 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2532196 ']' 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:36.576 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:36.576 [2024-11-07 10:35:04.196388] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
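The connectivity checks above are the tail end of nvmf_tcp_init: the target-side interface is moved into its own network namespace, both sides are addressed, the NVMe/TCP port is opened, and the two sides ping each other before the target application starts. The sequence below is condensed from the nvmf/common.sh trace; error handling and cleanup are omitted, and the cvl_0_* names belong to this test bed.

NS=cvl_0_0_ns_spdk        # namespace that will host the NVMe-oF target
TGT_IF=cvl_0_0            # target-side port of the E810 NIC
INI_IF=cvl_0_1            # initiator-side port, stays in the default namespace

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port and verify both directions, as in the log.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1   # namespaced target -> initiator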
00:06:36.576 [2024-11-07 10:35:04.196430] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.833 [2024-11-07 10:35:04.264278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.834 [2024-11-07 10:35:04.306471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:36.834 [2024-11-07 10:35:04.306506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:36.834 [2024-11-07 10:35:04.306513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:36.834 [2024-11-07 10:35:04.306519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:36.834 [2024-11-07 10:35:04.306524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:36.834 [2024-11-07 10:35:04.307931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.834 [2024-11-07 10:35:04.308028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.834 [2024-11-07 10:35:04.308029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.834 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:36.834 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:06:36.834 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:36.834 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:36.834 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:36.834 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:36.834 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:37.091 [2024-11-07 10:35:04.609107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.091 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:37.349 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:37.349 10:35:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:37.607 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:37.607 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:37.607 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:37.865 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=199f073c-1864-4a6c-a2ca-e64866636f1a 00:06:37.865 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 199f073c-1864-4a6c-a2ca-e64866636f1a lvol 20 00:06:38.122 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a14f7fa5-b4b3-4d2d-89f3-9678aabc9823 00:06:38.122 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:38.381 10:35:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a14f7fa5-b4b3-4d2d-89f3-9678aabc9823 00:06:38.639 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:38.639 [2024-11-07 10:35:06.215486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:38.639 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:38.896 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2532643 00:06:38.896 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:38.897 10:35:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:39.831 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a14f7fa5-b4b3-4d2d-89f3-9678aabc9823 MY_SNAPSHOT 00:06:40.089 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f0259a78-a219-4394-8a64-3621a77752c6 00:06:40.090 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a14f7fa5-b4b3-4d2d-89f3-9678aabc9823 30 00:06:40.348 10:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f0259a78-a219-4394-8a64-3621a77752c6 MY_CLONE 00:06:40.605 10:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2ee7a4f2-6dc8-45f0-88cf-7d2a4dd710f2 00:06:40.605 10:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2ee7a4f2-6dc8-45f0-88cf-7d2a4dd710f2 00:06:41.172 10:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2532643 00:06:49.281 Initializing NVMe Controllers 00:06:49.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:49.281 Controller IO queue size 128, less than required. 00:06:49.281 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
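Before the perf output above completes, the trace has already exercised the whole lvol lifecycle over RPC: build a raid0 base from two malloc bdevs, carve an lvol out of an lvstore, export it over NVMe/TCP, then snapshot, resize (20 to 30), clone and inflate it while spdk_nvme_perf writes to the namespace. The recap below lists those calls as rpc.py invocations; the UUIDs are the ones captured in this run and will differ elsewhere, and rpc.py is assumed to target the default /var/tmp/spdk.sock.

RPC=./scripts/rpc.py

$RPC bdev_malloc_create 64 512                              # -> Malloc0
$RPC bdev_malloc_create 64 512                              # -> Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
$RPC bdev_lvol_create_lvstore raid0 lvs                     # -> lvstore UUID
$RPC bdev_lvol_create -u 199f073c-1864-4a6c-a2ca-e64866636f1a lvol 20

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a14f7fa5-b4b3-4d2d-89f3-9678aabc9823
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# While the perf workload is running against the namespace:
$RPC bdev_lvol_snapshot a14f7fa5-b4b3-4d2d-89f3-9678aabc9823 MY_SNAPSHOT
$RPC bdev_lvol_resize a14f7fa5-b4b3-4d2d-89f3-9678aabc9823 30
$RPC bdev_lvol_clone f0259a78-a219-4394-8a64-3621a77752c6 MY_CLONE
$RPC bdev_lvol_inflate 2ee7a4f2-6dc8-45f0-88cf-7d2a4dd710f2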
00:06:49.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:49.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:49.281 Initialization complete. Launching workers. 00:06:49.281 ======================================================== 00:06:49.281 Latency(us) 00:06:49.281 Device Information : IOPS MiB/s Average min max 00:06:49.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11630.46 45.43 11010.85 1606.16 50779.77 00:06:49.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11695.16 45.68 10945.84 3548.96 116951.55 00:06:49.281 ======================================================== 00:06:49.281 Total : 23325.62 91.12 10978.26 1606.16 116951.55 00:06:49.281 00:06:49.281 10:35:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:49.539 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a14f7fa5-b4b3-4d2d-89f3-9678aabc9823 00:06:49.797 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 199f073c-1864-4a6c-a2ca-e64866636f1a 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:50.055 rmmod nvme_tcp 00:06:50.055 rmmod nvme_fabrics 00:06:50.055 rmmod nvme_keyring 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2532196 ']' 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2532196 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2532196 ']' 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2532196 00:06:50.055 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:06:50.056 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:50.056 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2532196 00:06:50.056 10:35:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:50.056 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:50.056 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2532196' 00:06:50.056 killing process with pid 2532196 00:06:50.056 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2532196 00:06:50.056 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2532196 00:06:50.314 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:50.314 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:50.314 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:50.314 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:50.314 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:50.314 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:50.314 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:50.314 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:50.314 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:50.314 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.314 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.314 10:35:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.847 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:52.847 00:06:52.847 real 0m21.499s 00:06:52.847 user 1m3.244s 00:06:52.847 sys 0m7.113s 00:06:52.847 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.847 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:52.847 ************************************ 00:06:52.847 END TEST nvmf_lvol 00:06:52.847 ************************************ 00:06:52.847 10:35:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:52.847 10:35:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:52.847 10:35:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.847 10:35:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:52.847 ************************************ 00:06:52.847 START TEST nvmf_lvs_grow 00:06:52.847 ************************************ 00:06:52.847 10:35:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:52.847 * Looking for test storage... 
00:06:52.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:52.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.847 --rc genhtml_branch_coverage=1 00:06:52.847 --rc genhtml_function_coverage=1 00:06:52.847 --rc genhtml_legend=1 00:06:52.847 --rc geninfo_all_blocks=1 00:06:52.847 --rc geninfo_unexecuted_blocks=1 00:06:52.847 00:06:52.847 ' 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:52.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.847 --rc genhtml_branch_coverage=1 00:06:52.847 --rc genhtml_function_coverage=1 00:06:52.847 --rc genhtml_legend=1 00:06:52.847 --rc geninfo_all_blocks=1 00:06:52.847 --rc geninfo_unexecuted_blocks=1 00:06:52.847 00:06:52.847 ' 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:52.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.847 --rc genhtml_branch_coverage=1 00:06:52.847 --rc genhtml_function_coverage=1 00:06:52.847 --rc genhtml_legend=1 00:06:52.847 --rc geninfo_all_blocks=1 00:06:52.847 --rc geninfo_unexecuted_blocks=1 00:06:52.847 00:06:52.847 ' 00:06:52.847 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:52.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.847 --rc genhtml_branch_coverage=1 00:06:52.848 --rc genhtml_function_coverage=1 00:06:52.848 --rc genhtml_legend=1 00:06:52.848 --rc geninfo_all_blocks=1 00:06:52.848 --rc geninfo_unexecuted_blocks=1 00:06:52.848 00:06:52.848 ' 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:52.848 10:35:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:52.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:52.848 10:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:58.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:58.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.119 10:35:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:58.119 Found net devices under 0000:86:00.0: cvl_0_0 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.119 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:58.119 Found net devices under 0000:86:00.1: cvl_0_1 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:58.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:58.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:06:58.120 00:06:58.120 --- 10.0.0.2 ping statistics --- 00:06:58.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.120 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:58.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:06:58.120 00:06:58.120 --- 10.0.0.1 ping statistics --- 00:06:58.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.120 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2538026 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2538026 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2538026 ']' 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.120 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:58.120 [2024-11-07 10:35:25.597575] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:06:58.120 [2024-11-07 10:35:25.597621] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.120 [2024-11-07 10:35:25.662973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.120 [2024-11-07 10:35:25.703975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.120 [2024-11-07 10:35:25.704010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.120 [2024-11-07 10:35:25.704018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.120 [2024-11-07 10:35:25.704024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.120 [2024-11-07 10:35:25.704029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:58.120 [2024-11-07 10:35:25.704563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.378 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:58.378 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:06:58.378 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:58.378 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:58.378 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.378 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.378 10:35:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:58.378 [2024-11-07 10:35:26.004464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.378 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:58.378 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:58.378 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:58.378 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:58.635 ************************************ 00:06:58.635 START TEST lvs_grow_clean 00:06:58.635 ************************************ 00:06:58.635 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:06:58.635 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:58.635 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:58.635 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:58.635 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:58.635 10:35:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:58.635 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:58.635 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:58.635 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:58.635 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:58.635 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:58.892 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:58.892 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9c631117-393a-4547-88a9-3aa855954169 00:06:58.892 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c631117-393a-4547-88a9-3aa855954169 00:06:58.892 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:59.150 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:59.150 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:59.150 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9c631117-393a-4547-88a9-3aa855954169 lvol 150 00:06:59.408 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0fe9112f-2eb6-4e7f-8d3a-ec6c13f0c9b0 00:06:59.408 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:59.408 10:35:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:59.408 [2024-11-07 10:35:27.043854] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:59.408 [2024-11-07 10:35:27.043905] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:59.408 true 00:06:59.408 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
9c631117-393a-4547-88a9-3aa855954169 00:06:59.408 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:59.666 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:59.666 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:59.923 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0fe9112f-2eb6-4e7f-8d3a-ec6c13f0c9b0 00:07:00.181 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:00.181 [2024-11-07 10:35:27.774075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.181 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:00.439 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2538522 00:07:00.439 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:00.439 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2538522 /var/tmp/bdevperf.sock 00:07:00.439 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2538522 ']' 00:07:00.439 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:00.439 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:00.439 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:00.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:00.439 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:00.439 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:00.439 10:35:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:00.439 [2024-11-07 10:35:28.015293] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:07:00.439 [2024-11-07 10:35:28.015343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2538522 ] 00:07:00.439 [2024-11-07 10:35:28.077076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.697 [2024-11-07 10:35:28.118251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.697 10:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.697 10:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:00.697 10:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:00.955 Nvme0n1 00:07:01.213 10:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:01.213 [ 00:07:01.213 { 00:07:01.213 "name": "Nvme0n1", 00:07:01.213 "aliases": [ 00:07:01.213 "0fe9112f-2eb6-4e7f-8d3a-ec6c13f0c9b0" 00:07:01.213 ], 00:07:01.213 "product_name": "NVMe disk", 00:07:01.213 "block_size": 4096, 00:07:01.213 "num_blocks": 38912, 00:07:01.213 "uuid": "0fe9112f-2eb6-4e7f-8d3a-ec6c13f0c9b0", 00:07:01.213 "numa_id": 1, 00:07:01.213 "assigned_rate_limits": { 00:07:01.213 "rw_ios_per_sec": 0, 00:07:01.213 "rw_mbytes_per_sec": 0, 00:07:01.213 "r_mbytes_per_sec": 0, 00:07:01.213 "w_mbytes_per_sec": 0 00:07:01.213 }, 00:07:01.213 "claimed": false, 00:07:01.213 "zoned": false, 00:07:01.213 "supported_io_types": { 00:07:01.213 "read": true, 00:07:01.213 "write": true, 00:07:01.213 "unmap": true, 00:07:01.213 "flush": true, 00:07:01.213 "reset": true, 00:07:01.213 "nvme_admin": true, 00:07:01.213 "nvme_io": true, 00:07:01.213 "nvme_io_md": false, 00:07:01.213 "write_zeroes": true, 00:07:01.213 "zcopy": false, 00:07:01.213 "get_zone_info": false, 00:07:01.213 "zone_management": false, 00:07:01.213 "zone_append": false, 00:07:01.213 "compare": true, 00:07:01.213 "compare_and_write": true, 00:07:01.213 "abort": true, 00:07:01.213 "seek_hole": false, 00:07:01.213 "seek_data": false, 00:07:01.213 "copy": true, 00:07:01.213 "nvme_iov_md": false 00:07:01.213 }, 00:07:01.213 "memory_domains": [ 00:07:01.213 { 00:07:01.213 "dma_device_id": "system", 00:07:01.213 "dma_device_type": 1 00:07:01.213 } 00:07:01.213 ], 00:07:01.213 "driver_specific": { 00:07:01.213 "nvme": [ 00:07:01.213 { 00:07:01.213 "trid": { 00:07:01.213 "trtype": "TCP", 00:07:01.213 "adrfam": "IPv4", 00:07:01.213 "traddr": "10.0.0.2", 00:07:01.213 "trsvcid": "4420", 00:07:01.213 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:01.213 }, 00:07:01.213 "ctrlr_data": { 00:07:01.213 "cntlid": 1, 00:07:01.213 "vendor_id": "0x8086", 00:07:01.213 "model_number": "SPDK bdev Controller", 00:07:01.213 "serial_number": "SPDK0", 00:07:01.213 "firmware_revision": "25.01", 00:07:01.213 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:01.213 "oacs": { 00:07:01.213 "security": 0, 00:07:01.213 "format": 0, 00:07:01.213 "firmware": 0, 00:07:01.213 "ns_manage": 0 00:07:01.213 }, 00:07:01.213 "multi_ctrlr": true, 00:07:01.213 
"ana_reporting": false 00:07:01.213 }, 00:07:01.213 "vs": { 00:07:01.213 "nvme_version": "1.3" 00:07:01.213 }, 00:07:01.213 "ns_data": { 00:07:01.213 "id": 1, 00:07:01.213 "can_share": true 00:07:01.213 } 00:07:01.213 } 00:07:01.213 ], 00:07:01.213 "mp_policy": "active_passive" 00:07:01.213 } 00:07:01.213 } 00:07:01.213 ] 00:07:01.213 10:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2538541 00:07:01.213 10:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:01.213 10:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:01.471 Running I/O for 10 seconds... 00:07:02.431 Latency(us) 00:07:02.431 [2024-11-07T09:35:30.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.431 Nvme0n1 : 1.00 22659.00 88.51 0.00 0.00 0.00 0.00 0.00 00:07:02.431 [2024-11-07T09:35:30.102Z] =================================================================================================================== 00:07:02.431 [2024-11-07T09:35:30.102Z] Total : 22659.00 88.51 0.00 0.00 0.00 0.00 0.00 00:07:02.431 00:07:03.364 10:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9c631117-393a-4547-88a9-3aa855954169 00:07:03.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.364 Nvme0n1 : 2.00 22605.00 88.30 0.00 0.00 0.00 0.00 0.00 00:07:03.364 [2024-11-07T09:35:31.035Z] =================================================================================================================== 00:07:03.364 [2024-11-07T09:35:31.035Z] Total : 22605.00 88.30 0.00 0.00 0.00 0.00 0.00 00:07:03.364 00:07:03.364 true 00:07:03.623 10:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c631117-393a-4547-88a9-3aa855954169 00:07:03.623 10:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:03.623 10:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:03.623 10:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:03.623 10:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2538541 00:07:04.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.555 Nvme0n1 : 3.00 22727.67 88.78 0.00 0.00 0.00 0.00 0.00 00:07:04.556 [2024-11-07T09:35:32.227Z] =================================================================================================================== 00:07:04.556 [2024-11-07T09:35:32.227Z] Total : 22727.67 88.78 0.00 0.00 0.00 0.00 0.00 00:07:04.556 00:07:05.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.489 Nvme0n1 : 4.00 22827.50 89.17 0.00 0.00 0.00 0.00 0.00 00:07:05.489 [2024-11-07T09:35:33.160Z] 
=================================================================================================================== 00:07:05.489 [2024-11-07T09:35:33.160Z] Total : 22827.50 89.17 0.00 0.00 0.00 0.00 0.00 00:07:05.489 00:07:06.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.423 Nvme0n1 : 5.00 22889.40 89.41 0.00 0.00 0.00 0.00 0.00 00:07:06.423 [2024-11-07T09:35:34.094Z] =================================================================================================================== 00:07:06.423 [2024-11-07T09:35:34.094Z] Total : 22889.40 89.41 0.00 0.00 0.00 0.00 0.00 00:07:06.423 00:07:07.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.356 Nvme0n1 : 6.00 22926.83 89.56 0.00 0.00 0.00 0.00 0.00 00:07:07.356 [2024-11-07T09:35:35.027Z] =================================================================================================================== 00:07:07.356 [2024-11-07T09:35:35.027Z] Total : 22926.83 89.56 0.00 0.00 0.00 0.00 0.00 00:07:07.356 00:07:08.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.289 Nvme0n1 : 7.00 22981.00 89.77 0.00 0.00 0.00 0.00 0.00 00:07:08.289 [2024-11-07T09:35:35.960Z] =================================================================================================================== 00:07:08.289 [2024-11-07T09:35:35.960Z] Total : 22981.00 89.77 0.00 0.00 0.00 0.00 0.00 00:07:08.289 00:07:09.661 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.661 Nvme0n1 : 8.00 23000.75 89.85 0.00 0.00 0.00 0.00 0.00 00:07:09.661 [2024-11-07T09:35:37.332Z] =================================================================================================================== 00:07:09.661 [2024-11-07T09:35:37.332Z] Total : 23000.75 89.85 0.00 0.00 0.00 0.00 0.00 00:07:09.661 00:07:10.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.594 Nvme0n1 : 9.00 23020.56 89.92 0.00 0.00 0.00 0.00 0.00 00:07:10.594 [2024-11-07T09:35:38.265Z] =================================================================================================================== 00:07:10.594 [2024-11-07T09:35:38.265Z] Total : 23020.56 89.92 0.00 0.00 0.00 0.00 0.00 00:07:10.594 00:07:11.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.528 Nvme0n1 : 10.00 23039.50 90.00 0.00 0.00 0.00 0.00 0.00 00:07:11.528 [2024-11-07T09:35:39.199Z] =================================================================================================================== 00:07:11.528 [2024-11-07T09:35:39.199Z] Total : 23039.50 90.00 0.00 0.00 0.00 0.00 0.00 00:07:11.528 00:07:11.528 00:07:11.528 Latency(us) 00:07:11.528 [2024-11-07T09:35:39.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.528 Nvme0n1 : 10.00 23043.05 90.01 0.00 0.00 5551.99 1417.57 11454.55 00:07:11.528 [2024-11-07T09:35:39.199Z] =================================================================================================================== 00:07:11.528 [2024-11-07T09:35:39.199Z] Total : 23043.05 90.01 0.00 0.00 5551.99 1417.57 11454.55 00:07:11.528 { 00:07:11.528 "results": [ 00:07:11.528 { 00:07:11.528 "job": "Nvme0n1", 00:07:11.528 "core_mask": "0x2", 00:07:11.528 "workload": "randwrite", 00:07:11.528 "status": "finished", 00:07:11.528 "queue_depth": 128, 00:07:11.528 "io_size": 4096, 00:07:11.528 
"runtime": 10.004014, 00:07:11.528 "iops": 23043.050519521465, 00:07:11.528 "mibps": 90.01191609188072, 00:07:11.528 "io_failed": 0, 00:07:11.528 "io_timeout": 0, 00:07:11.528 "avg_latency_us": 5551.98552185965, 00:07:11.528 "min_latency_us": 1417.5721739130436, 00:07:11.528 "max_latency_us": 11454.553043478261 00:07:11.528 } 00:07:11.528 ], 00:07:11.528 "core_count": 1 00:07:11.528 } 00:07:11.528 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2538522 00:07:11.528 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2538522 ']' 00:07:11.528 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2538522 00:07:11.528 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:11.528 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:11.528 10:35:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2538522 00:07:11.528 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:11.528 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:11.528 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2538522' 00:07:11.528 killing process with pid 2538522 00:07:11.528 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2538522 00:07:11.528 Received shutdown signal, test time was about 10.000000 seconds 00:07:11.528 00:07:11.528 Latency(us) 00:07:11.528 [2024-11-07T09:35:39.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.528 [2024-11-07T09:35:39.199Z] =================================================================================================================== 00:07:11.528 [2024-11-07T09:35:39.199Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:11.528 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2538522 00:07:11.528 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:11.786 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:12.044 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c631117-393a-4547-88a9-3aa855954169 00:07:12.044 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:12.302 10:35:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:12.302 [2024-11-07 10:35:39.926928] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c631117-393a-4547-88a9-3aa855954169 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c631117-393a-4547-88a9-3aa855954169 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:12.302 10:35:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c631117-393a-4547-88a9-3aa855954169 00:07:12.560 request: 00:07:12.560 { 00:07:12.560 "uuid": "9c631117-393a-4547-88a9-3aa855954169", 00:07:12.560 "method": "bdev_lvol_get_lvstores", 00:07:12.560 "req_id": 1 00:07:12.560 } 00:07:12.560 Got JSON-RPC error response 00:07:12.560 response: 00:07:12.560 { 00:07:12.560 "code": -19, 00:07:12.560 "message": "No such device" 00:07:12.560 } 00:07:12.560 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:12.560 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:12.560 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:12.560 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:12.560 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:12.818 aio_bdev 00:07:12.818 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0fe9112f-2eb6-4e7f-8d3a-ec6c13f0c9b0 00:07:12.818 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=0fe9112f-2eb6-4e7f-8d3a-ec6c13f0c9b0 00:07:12.818 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:12.818 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:12.818 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:12.818 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:12.818 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:13.075 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0fe9112f-2eb6-4e7f-8d3a-ec6c13f0c9b0 -t 2000 00:07:13.075 [ 00:07:13.075 { 00:07:13.075 "name": "0fe9112f-2eb6-4e7f-8d3a-ec6c13f0c9b0", 00:07:13.075 "aliases": [ 00:07:13.075 "lvs/lvol" 00:07:13.075 ], 00:07:13.075 "product_name": "Logical Volume", 00:07:13.075 "block_size": 4096, 00:07:13.075 "num_blocks": 38912, 00:07:13.075 "uuid": "0fe9112f-2eb6-4e7f-8d3a-ec6c13f0c9b0", 00:07:13.075 "assigned_rate_limits": { 00:07:13.075 "rw_ios_per_sec": 0, 00:07:13.075 "rw_mbytes_per_sec": 0, 00:07:13.075 "r_mbytes_per_sec": 0, 00:07:13.075 "w_mbytes_per_sec": 0 00:07:13.075 }, 00:07:13.075 "claimed": false, 00:07:13.075 "zoned": false, 00:07:13.075 "supported_io_types": { 00:07:13.075 "read": true, 00:07:13.075 "write": true, 00:07:13.075 "unmap": true, 00:07:13.075 "flush": false, 00:07:13.075 "reset": true, 00:07:13.075 "nvme_admin": false, 00:07:13.075 "nvme_io": false, 00:07:13.075 "nvme_io_md": false, 00:07:13.075 "write_zeroes": true, 00:07:13.075 "zcopy": false, 00:07:13.075 "get_zone_info": false, 00:07:13.075 "zone_management": false, 00:07:13.075 "zone_append": false, 00:07:13.075 "compare": false, 00:07:13.075 "compare_and_write": false, 00:07:13.075 "abort": false, 00:07:13.075 "seek_hole": true, 00:07:13.075 "seek_data": true, 00:07:13.075 "copy": false, 00:07:13.075 "nvme_iov_md": false 00:07:13.075 }, 00:07:13.075 "driver_specific": { 00:07:13.075 "lvol": { 00:07:13.075 "lvol_store_uuid": "9c631117-393a-4547-88a9-3aa855954169", 00:07:13.075 "base_bdev": "aio_bdev", 00:07:13.075 "thin_provision": false, 00:07:13.075 "num_allocated_clusters": 38, 00:07:13.075 "snapshot": false, 00:07:13.075 "clone": false, 00:07:13.075 "esnap_clone": false 00:07:13.075 } 00:07:13.075 } 00:07:13.075 } 00:07:13.075 ] 00:07:13.075 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:13.075 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c631117-393a-4547-88a9-3aa855954169 00:07:13.075 
10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:13.334 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:13.334 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c631117-393a-4547-88a9-3aa855954169 00:07:13.334 10:35:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:13.592 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:13.592 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0fe9112f-2eb6-4e7f-8d3a-ec6c13f0c9b0 00:07:13.849 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9c631117-393a-4547-88a9-3aa855954169 00:07:13.849 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:14.108 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:14.108 00:07:14.108 real 0m15.647s 00:07:14.108 user 0m15.210s 00:07:14.108 sys 0m1.456s 00:07:14.108 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:14.108 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:14.108 ************************************ 00:07:14.108 END TEST lvs_grow_clean 00:07:14.108 ************************************ 00:07:14.108 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:14.108 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:14.108 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:14.108 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:14.108 ************************************ 00:07:14.108 START TEST lvs_grow_dirty 00:07:14.108 ************************************ 00:07:14.108 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:14.108 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:14.108 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:14.108 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:14.366 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:14.366 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:14.366 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:14.366 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:14.366 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:14.366 10:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:14.366 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:14.366 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:14.625 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e353f11d-dde8-4649-b841-c104070d1b42 00:07:14.625 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e353f11d-dde8-4649-b841-c104070d1b42 00:07:14.625 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:14.884 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:14.884 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:14.884 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e353f11d-dde8-4649-b841-c104070d1b42 lvol 150 00:07:15.142 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=aced7e84-71cd-468d-b26c-aaf039d96899 00:07:15.142 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:15.142 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:15.142 [2024-11-07 10:35:42.740325] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:15.142 [2024-11-07 10:35:42.740370] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:15.142 true 00:07:15.142 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e353f11d-dde8-4649-b841-c104070d1b42 00:07:15.142 10:35:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:15.399 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:15.399 10:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:15.657 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aced7e84-71cd-468d-b26c-aaf039d96899 00:07:15.915 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:15.915 [2024-11-07 10:35:43.522658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.915 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:16.174 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2541129 00:07:16.174 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:16.174 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2541129 /var/tmp/bdevperf.sock 00:07:16.174 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2541129 ']' 00:07:16.174 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:16.174 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:16.174 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:16.174 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:16.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:16.174 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:16.174 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:16.174 [2024-11-07 10:35:43.755108] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:07:16.174 [2024-11-07 10:35:43.755158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541129 ] 00:07:16.174 [2024-11-07 10:35:43.817261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.432 [2024-11-07 10:35:43.860623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.432 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:16.432 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:16.432 10:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:16.691 Nvme0n1 00:07:16.950 10:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:16.950 [ 00:07:16.950 { 00:07:16.950 "name": "Nvme0n1", 00:07:16.950 "aliases": [ 00:07:16.950 "aced7e84-71cd-468d-b26c-aaf039d96899" 00:07:16.950 ], 00:07:16.950 "product_name": "NVMe disk", 00:07:16.950 "block_size": 4096, 00:07:16.950 "num_blocks": 38912, 00:07:16.950 "uuid": "aced7e84-71cd-468d-b26c-aaf039d96899", 00:07:16.950 "numa_id": 1, 00:07:16.950 "assigned_rate_limits": { 00:07:16.950 "rw_ios_per_sec": 0, 00:07:16.950 "rw_mbytes_per_sec": 0, 00:07:16.950 "r_mbytes_per_sec": 0, 00:07:16.950 "w_mbytes_per_sec": 0 00:07:16.950 }, 00:07:16.950 "claimed": false, 00:07:16.950 "zoned": false, 00:07:16.950 "supported_io_types": { 00:07:16.950 "read": true, 00:07:16.950 "write": true, 00:07:16.950 "unmap": true, 00:07:16.950 "flush": true, 00:07:16.950 "reset": true, 00:07:16.950 "nvme_admin": true, 00:07:16.950 "nvme_io": true, 00:07:16.950 "nvme_io_md": false, 00:07:16.950 "write_zeroes": true, 00:07:16.950 "zcopy": false, 00:07:16.950 "get_zone_info": false, 00:07:16.950 "zone_management": false, 00:07:16.950 "zone_append": false, 00:07:16.950 "compare": true, 00:07:16.950 "compare_and_write": true, 00:07:16.950 "abort": true, 00:07:16.950 "seek_hole": false, 00:07:16.950 "seek_data": false, 00:07:16.950 "copy": true, 00:07:16.950 "nvme_iov_md": false 00:07:16.950 }, 00:07:16.950 "memory_domains": [ 00:07:16.950 { 00:07:16.950 "dma_device_id": "system", 00:07:16.950 "dma_device_type": 1 00:07:16.950 } 00:07:16.950 ], 00:07:16.950 "driver_specific": { 00:07:16.950 "nvme": [ 00:07:16.950 { 00:07:16.950 "trid": { 00:07:16.950 "trtype": "TCP", 00:07:16.950 "adrfam": "IPv4", 00:07:16.950 "traddr": "10.0.0.2", 00:07:16.950 "trsvcid": "4420", 00:07:16.950 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:16.950 }, 00:07:16.950 "ctrlr_data": { 00:07:16.950 "cntlid": 1, 00:07:16.950 "vendor_id": "0x8086", 00:07:16.950 "model_number": "SPDK bdev Controller", 00:07:16.950 "serial_number": "SPDK0", 00:07:16.950 "firmware_revision": "25.01", 00:07:16.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:16.950 "oacs": { 00:07:16.950 "security": 0, 00:07:16.950 "format": 0, 00:07:16.950 "firmware": 0, 00:07:16.950 "ns_manage": 0 00:07:16.950 }, 00:07:16.950 "multi_ctrlr": true, 00:07:16.950 
"ana_reporting": false 00:07:16.950 }, 00:07:16.950 "vs": { 00:07:16.950 "nvme_version": "1.3" 00:07:16.951 }, 00:07:16.951 "ns_data": { 00:07:16.951 "id": 1, 00:07:16.951 "can_share": true 00:07:16.951 } 00:07:16.951 } 00:07:16.951 ], 00:07:16.951 "mp_policy": "active_passive" 00:07:16.951 } 00:07:16.951 } 00:07:16.951 ] 00:07:16.951 10:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:16.951 10:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2541353 00:07:16.951 10:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:17.209 Running I/O for 10 seconds... 00:07:18.145 Latency(us) 00:07:18.145 [2024-11-07T09:35:45.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:18.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.145 Nvme0n1 : 1.00 22751.00 88.87 0.00 0.00 0.00 0.00 0.00 00:07:18.145 [2024-11-07T09:35:45.816Z] =================================================================================================================== 00:07:18.145 [2024-11-07T09:35:45.816Z] Total : 22751.00 88.87 0.00 0.00 0.00 0.00 0.00 00:07:18.145 00:07:19.083 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e353f11d-dde8-4649-b841-c104070d1b42 00:07:19.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.083 Nvme0n1 : 2.00 22852.50 89.27 0.00 0.00 0.00 0.00 0.00 00:07:19.083 [2024-11-07T09:35:46.754Z] =================================================================================================================== 00:07:19.083 [2024-11-07T09:35:46.754Z] Total : 22852.50 89.27 0.00 0.00 0.00 0.00 0.00 00:07:19.083 00:07:19.342 true 00:07:19.342 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e353f11d-dde8-4649-b841-c104070d1b42 00:07:19.342 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:19.342 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:19.342 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:19.342 10:35:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2541353 00:07:20.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.293 Nvme0n1 : 3.00 22901.00 89.46 0.00 0.00 0.00 0.00 0.00 00:07:20.293 [2024-11-07T09:35:47.964Z] =================================================================================================================== 00:07:20.293 [2024-11-07T09:35:47.964Z] Total : 22901.00 89.46 0.00 0.00 0.00 0.00 0.00 00:07:20.293 00:07:21.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.336 Nvme0n1 : 4.00 22957.25 89.68 0.00 0.00 0.00 0.00 0.00 00:07:21.336 [2024-11-07T09:35:49.007Z] 
=================================================================================================================== 00:07:21.336 [2024-11-07T09:35:49.007Z] Total : 22957.25 89.68 0.00 0.00 0.00 0.00 0.00 00:07:21.336 00:07:22.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.273 Nvme0n1 : 5.00 23006.40 89.87 0.00 0.00 0.00 0.00 0.00 00:07:22.273 [2024-11-07T09:35:49.944Z] =================================================================================================================== 00:07:22.273 [2024-11-07T09:35:49.944Z] Total : 23006.40 89.87 0.00 0.00 0.00 0.00 0.00 00:07:22.273 00:07:23.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.207 Nvme0n1 : 6.00 22985.83 89.79 0.00 0.00 0.00 0.00 0.00 00:07:23.207 [2024-11-07T09:35:50.878Z] =================================================================================================================== 00:07:23.207 [2024-11-07T09:35:50.878Z] Total : 22985.83 89.79 0.00 0.00 0.00 0.00 0.00 00:07:23.207 00:07:24.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.142 Nvme0n1 : 7.00 23019.86 89.92 0.00 0.00 0.00 0.00 0.00 00:07:24.142 [2024-11-07T09:35:51.813Z] =================================================================================================================== 00:07:24.142 [2024-11-07T09:35:51.813Z] Total : 23019.86 89.92 0.00 0.00 0.00 0.00 0.00 00:07:24.142 00:07:25.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.077 Nvme0n1 : 8.00 23048.62 90.03 0.00 0.00 0.00 0.00 0.00 00:07:25.077 [2024-11-07T09:35:52.748Z] =================================================================================================================== 00:07:25.077 [2024-11-07T09:35:52.748Z] Total : 23048.62 90.03 0.00 0.00 0.00 0.00 0.00 00:07:25.077 00:07:26.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.012 Nvme0n1 : 9.00 23068.56 90.11 0.00 0.00 0.00 0.00 0.00 00:07:26.012 [2024-11-07T09:35:53.683Z] =================================================================================================================== 00:07:26.012 [2024-11-07T09:35:53.683Z] Total : 23068.56 90.11 0.00 0.00 0.00 0.00 0.00 00:07:26.012 00:07:27.387 00:07:27.387 Latency(us) 00:07:27.387 [2024-11-07T09:35:55.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.387 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.387 Nvme0n1 : 10.00 23058.87 90.07 0.00 0.00 5548.23 3305.29 11055.64 00:07:27.387 [2024-11-07T09:35:55.058Z] =================================================================================================================== 00:07:27.387 [2024-11-07T09:35:55.058Z] Total : 23058.87 90.07 0.00 0.00 5548.23 3305.29 11055.64 00:07:27.387 { 00:07:27.387 "results": [ 00:07:27.387 { 00:07:27.387 "job": "Nvme0n1", 00:07:27.387 "core_mask": "0x2", 00:07:27.387 "workload": "randwrite", 00:07:27.387 "status": "finished", 00:07:27.387 "queue_depth": 128, 00:07:27.387 "io_size": 4096, 00:07:27.387 "runtime": 10.001185, 00:07:27.387 "iops": 23058.867524198384, 00:07:27.387 "mibps": 90.07370126639994, 00:07:27.387 "io_failed": 0, 00:07:27.387 "io_timeout": 0, 00:07:27.387 "avg_latency_us": 5548.227097482583, 00:07:27.387 "min_latency_us": 3305.2939130434784, 00:07:27.387 "max_latency_us": 11055.638260869566 00:07:27.387 } 00:07:27.387 ], 00:07:27.387 "core_count": 1 00:07:27.387 } 00:07:27.387 10:35:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2541129 00:07:27.388 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2541129 ']' 00:07:27.388 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2541129 00:07:27.388 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:27.388 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:27.388 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2541129 00:07:27.388 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:27.388 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:27.388 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2541129' 00:07:27.388 killing process with pid 2541129 00:07:27.388 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2541129 00:07:27.388 Received shutdown signal, test time was about 10.000000 seconds 00:07:27.388 00:07:27.388 Latency(us) 00:07:27.388 [2024-11-07T09:35:55.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.388 [2024-11-07T09:35:55.059Z] =================================================================================================================== 00:07:27.388 [2024-11-07T09:35:55.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:27.388 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2541129 00:07:27.388 10:35:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.646 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:27.646 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e353f11d-dde8-4649-b841-c104070d1b42 00:07:27.646 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2538026 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2538026 00:07:27.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2538026 Killed "${NVMF_APP[@]}" "$@" 00:07:27.905 10:35:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2543128 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2543128 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2543128 ']' 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:27.905 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:28.164 [2024-11-07 10:35:55.574659] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:07:28.164 [2024-11-07 10:35:55.574717] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.164 [2024-11-07 10:35:55.641437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.164 [2024-11-07 10:35:55.682041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.164 [2024-11-07 10:35:55.682082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.164 [2024-11-07 10:35:55.682089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.164 [2024-11-07 10:35:55.682095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.164 [2024-11-07 10:35:55.682101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:28.164 [2024-11-07 10:35:55.682726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.164 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:28.164 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:28.164 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:28.164 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:28.164 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:28.164 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.164 10:35:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:28.422 [2024-11-07 10:35:55.988473] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:28.422 [2024-11-07 10:35:55.988556] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:28.422 [2024-11-07 10:35:55.988584] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:28.422 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:28.422 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev aced7e84-71cd-468d-b26c-aaf039d96899 00:07:28.423 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=aced7e84-71cd-468d-b26c-aaf039d96899 00:07:28.423 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:28.423 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:28.423 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:28.423 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:28.423 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:28.681 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aced7e84-71cd-468d-b26c-aaf039d96899 -t 2000 00:07:28.940 [ 00:07:28.940 { 00:07:28.940 "name": "aced7e84-71cd-468d-b26c-aaf039d96899", 00:07:28.940 "aliases": [ 00:07:28.940 "lvs/lvol" 00:07:28.940 ], 00:07:28.940 "product_name": "Logical Volume", 00:07:28.940 "block_size": 4096, 00:07:28.940 "num_blocks": 38912, 00:07:28.940 "uuid": "aced7e84-71cd-468d-b26c-aaf039d96899", 00:07:28.940 "assigned_rate_limits": { 00:07:28.940 "rw_ios_per_sec": 0, 00:07:28.940 "rw_mbytes_per_sec": 0, 00:07:28.940 "r_mbytes_per_sec": 0, 00:07:28.940 "w_mbytes_per_sec": 0 00:07:28.940 }, 00:07:28.940 "claimed": false, 00:07:28.940 "zoned": false, 
00:07:28.940 "supported_io_types": { 00:07:28.940 "read": true, 00:07:28.940 "write": true, 00:07:28.940 "unmap": true, 00:07:28.940 "flush": false, 00:07:28.940 "reset": true, 00:07:28.940 "nvme_admin": false, 00:07:28.940 "nvme_io": false, 00:07:28.940 "nvme_io_md": false, 00:07:28.940 "write_zeroes": true, 00:07:28.940 "zcopy": false, 00:07:28.940 "get_zone_info": false, 00:07:28.940 "zone_management": false, 00:07:28.940 "zone_append": false, 00:07:28.940 "compare": false, 00:07:28.940 "compare_and_write": false, 00:07:28.940 "abort": false, 00:07:28.940 "seek_hole": true, 00:07:28.940 "seek_data": true, 00:07:28.940 "copy": false, 00:07:28.940 "nvme_iov_md": false 00:07:28.940 }, 00:07:28.940 "driver_specific": { 00:07:28.940 "lvol": { 00:07:28.940 "lvol_store_uuid": "e353f11d-dde8-4649-b841-c104070d1b42", 00:07:28.940 "base_bdev": "aio_bdev", 00:07:28.940 "thin_provision": false, 00:07:28.940 "num_allocated_clusters": 38, 00:07:28.940 "snapshot": false, 00:07:28.940 "clone": false, 00:07:28.940 "esnap_clone": false 00:07:28.940 } 00:07:28.940 } 00:07:28.940 } 00:07:28.940 ] 00:07:28.940 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:28.940 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e353f11d-dde8-4649-b841-c104070d1b42 00:07:28.940 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:28.940 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:28.940 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e353f11d-dde8-4649-b841-c104070d1b42 00:07:28.940 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:29.199 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:29.199 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:29.456 [2024-11-07 10:35:56.945210] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:29.456 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e353f11d-dde8-4649-b841-c104070d1b42 00:07:29.456 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:29.456 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e353f11d-dde8-4649-b841-c104070d1b42 00:07:29.456 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.456 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:07:29.456 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.457 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.457 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.457 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.457 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:29.457 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:29.457 10:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e353f11d-dde8-4649-b841-c104070d1b42 00:07:29.714 request: 00:07:29.714 { 00:07:29.714 "uuid": "e353f11d-dde8-4649-b841-c104070d1b42", 00:07:29.714 "method": "bdev_lvol_get_lvstores", 00:07:29.714 "req_id": 1 00:07:29.714 } 00:07:29.714 Got JSON-RPC error response 00:07:29.714 response: 00:07:29.714 { 00:07:29.714 "code": -19, 00:07:29.714 "message": "No such device" 00:07:29.714 } 00:07:29.714 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:29.715 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.715 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:29.715 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.715 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:29.715 aio_bdev 00:07:29.715 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev aced7e84-71cd-468d-b26c-aaf039d96899 00:07:29.715 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=aced7e84-71cd-468d-b26c-aaf039d96899 00:07:29.715 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:29.715 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:29.715 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:29.715 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:29.715 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:29.973 10:35:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aced7e84-71cd-468d-b26c-aaf039d96899 -t 2000 00:07:30.231 [ 00:07:30.231 { 00:07:30.231 "name": "aced7e84-71cd-468d-b26c-aaf039d96899", 00:07:30.231 "aliases": [ 00:07:30.231 "lvs/lvol" 00:07:30.231 ], 00:07:30.231 "product_name": "Logical Volume", 00:07:30.231 "block_size": 4096, 00:07:30.231 "num_blocks": 38912, 00:07:30.231 "uuid": "aced7e84-71cd-468d-b26c-aaf039d96899", 00:07:30.231 "assigned_rate_limits": { 00:07:30.231 "rw_ios_per_sec": 0, 00:07:30.231 "rw_mbytes_per_sec": 0, 00:07:30.231 "r_mbytes_per_sec": 0, 00:07:30.231 "w_mbytes_per_sec": 0 00:07:30.231 }, 00:07:30.231 "claimed": false, 00:07:30.231 "zoned": false, 00:07:30.231 "supported_io_types": { 00:07:30.231 "read": true, 00:07:30.231 "write": true, 00:07:30.231 "unmap": true, 00:07:30.231 "flush": false, 00:07:30.231 "reset": true, 00:07:30.231 "nvme_admin": false, 00:07:30.231 "nvme_io": false, 00:07:30.231 "nvme_io_md": false, 00:07:30.231 "write_zeroes": true, 00:07:30.231 "zcopy": false, 00:07:30.231 "get_zone_info": false, 00:07:30.231 "zone_management": false, 00:07:30.231 "zone_append": false, 00:07:30.231 "compare": false, 00:07:30.231 "compare_and_write": false, 00:07:30.231 "abort": false, 00:07:30.231 "seek_hole": true, 00:07:30.231 "seek_data": true, 00:07:30.231 "copy": false, 00:07:30.231 "nvme_iov_md": false 00:07:30.231 }, 00:07:30.231 "driver_specific": { 00:07:30.231 "lvol": { 00:07:30.231 "lvol_store_uuid": "e353f11d-dde8-4649-b841-c104070d1b42", 00:07:30.231 "base_bdev": "aio_bdev", 00:07:30.231 "thin_provision": false, 00:07:30.231 "num_allocated_clusters": 38, 00:07:30.231 "snapshot": false, 00:07:30.231 "clone": false, 00:07:30.231 "esnap_clone": false 00:07:30.231 } 00:07:30.231 } 00:07:30.231 } 00:07:30.231 ] 00:07:30.231 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:30.231 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e353f11d-dde8-4649-b841-c104070d1b42 00:07:30.231 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:30.489 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:30.489 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e353f11d-dde8-4649-b841-c104070d1b42 00:07:30.489 10:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:30.747 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:30.747 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aced7e84-71cd-468d-b26c-aaf039d96899 00:07:30.747 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e353f11d-dde8-4649-b841-c104070d1b42 
00:07:31.005 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:31.263 00:07:31.263 real 0m17.018s 00:07:31.263 user 0m43.813s 00:07:31.263 sys 0m3.821s 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:31.263 ************************************ 00:07:31.263 END TEST lvs_grow_dirty 00:07:31.263 ************************************ 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:31.263 nvmf_trace.0 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:31.263 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:31.264 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:31.264 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:31.264 rmmod nvme_tcp 00:07:31.264 rmmod nvme_fabrics 00:07:31.264 rmmod nvme_keyring 00:07:31.522 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:31.522 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:31.522 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:31.522 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2543128 ']' 00:07:31.522 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2543128 00:07:31.522 
10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2543128 ']' 00:07:31.522 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2543128 00:07:31.522 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:07:31.522 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:31.522 10:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2543128 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2543128' 00:07:31.523 killing process with pid 2543128 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2543128 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2543128 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.523 10:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:34.055 00:07:34.055 real 0m41.256s 00:07:34.055 user 1m4.455s 00:07:34.055 sys 0m9.743s 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:34.055 ************************************ 00:07:34.055 END TEST nvmf_lvs_grow 00:07:34.055 ************************************ 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.055 ************************************ 00:07:34.055 START TEST nvmf_bdev_io_wait 00:07:34.055 ************************************ 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:34.055 * Looking for test storage... 00:07:34.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:34.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.055 --rc genhtml_branch_coverage=1 00:07:34.055 --rc genhtml_function_coverage=1 00:07:34.055 --rc genhtml_legend=1 00:07:34.055 --rc geninfo_all_blocks=1 00:07:34.055 --rc geninfo_unexecuted_blocks=1 00:07:34.055 00:07:34.055 ' 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:34.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.055 --rc genhtml_branch_coverage=1 00:07:34.055 --rc genhtml_function_coverage=1 00:07:34.055 --rc genhtml_legend=1 00:07:34.055 --rc geninfo_all_blocks=1 00:07:34.055 --rc geninfo_unexecuted_blocks=1 00:07:34.055 00:07:34.055 ' 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:34.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.055 --rc genhtml_branch_coverage=1 00:07:34.055 --rc genhtml_function_coverage=1 00:07:34.055 --rc genhtml_legend=1 00:07:34.055 --rc geninfo_all_blocks=1 00:07:34.055 --rc geninfo_unexecuted_blocks=1 00:07:34.055 00:07:34.055 ' 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:34.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.055 --rc genhtml_branch_coverage=1 00:07:34.055 --rc genhtml_function_coverage=1 00:07:34.055 --rc genhtml_legend=1 00:07:34.055 --rc geninfo_all_blocks=1 00:07:34.055 --rc geninfo_unexecuted_blocks=1 00:07:34.055 00:07:34.055 ' 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.055 10:36:01 
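The lcov version gate traced above (lt 1.15 2 via cmp_versions) splits both version strings on dots and compares them component-wise. A rough reconstruction of that behaviour, not the exact helper from scripts/common.sh:

  # Succeeds when $1 is strictly lower than $2, e.g. "lt 1.15 2" as in the trace.
  lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local i
      for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
          (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
          (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
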
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.055 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:34.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:34.056 10:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.323 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:39.324 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:39.324 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.324 10:36:06 
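The discovery step above walks a PCI id cache and reports the two E810 ports (vendor 0x8086, device 0x159b). Outside the test framework the same information can be pulled with lspci and sysfs; this is illustrative only and is not the code path used by gather_supported_nvmf_pci_devs:

  # List Intel E810-family NICs (8086:159b) and the kernel net interface bound to each port.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      netdev=$(ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null)
      echo "Found $pci -> ${netdev:-no-netdev}"
  done
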
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:39.324 Found net devices under 0000:86:00.0: cvl_0_0 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:39.324 Found net devices under 0000:86:00.1: cvl_0_1 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:39.324 10:36:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:39.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.451 ms 00:07:39.583 00:07:39.583 --- 10.0.0.2 ping statistics --- 00:07:39.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.583 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
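Condensing the namespace wiring traced above: the first E810 port (cvl_0_0) is moved into a private namespace and acts as the NVMe/TCP target side at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1. The commands below are the ones from the trace, minus the wrapper functions:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check
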
00:07:39.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:07:39.583 00:07:39.583 --- 10.0.0.1 ping statistics --- 00:07:39.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.583 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2547401 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2547401 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2547401 ']' 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:39.583 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.583 [2024-11-07 10:36:07.170769] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
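nvmfappstart, traced just above, boils down to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A condensed sketch with relative paths; the polling loop paraphrases waitforlisten, which is more elaborate in autotest_common.sh:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # Poll the default RPC socket until the target is ready to accept configuration RPCs.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
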
00:07:39.583 [2024-11-07 10:36:07.170818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.583 [2024-11-07 10:36:07.238836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.841 [2024-11-07 10:36:07.283673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.841 [2024-11-07 10:36:07.283708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.841 [2024-11-07 10:36:07.283715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:39.841 [2024-11-07 10:36:07.283721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:39.841 [2024-11-07 10:36:07.283729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:39.841 [2024-11-07 10:36:07.285214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.841 [2024-11-07 10:36:07.285312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.841 [2024-11-07 10:36:07.285409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.841 [2024-11-07 10:36:07.285411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:39.841 [2024-11-07 10:36:07.433712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.841 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.842 Malloc0 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:39.842 [2024-11-07 10:36:07.489339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2547433 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2547435 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:39.842 { 00:07:39.842 "params": { 
00:07:39.842 "name": "Nvme$subsystem", 00:07:39.842 "trtype": "$TEST_TRANSPORT", 00:07:39.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.842 "adrfam": "ipv4", 00:07:39.842 "trsvcid": "$NVMF_PORT", 00:07:39.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.842 "hdgst": ${hdgst:-false}, 00:07:39.842 "ddgst": ${ddgst:-false} 00:07:39.842 }, 00:07:39.842 "method": "bdev_nvme_attach_controller" 00:07:39.842 } 00:07:39.842 EOF 00:07:39.842 )") 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2547437 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:39.842 { 00:07:39.842 "params": { 00:07:39.842 "name": "Nvme$subsystem", 00:07:39.842 "trtype": "$TEST_TRANSPORT", 00:07:39.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.842 "adrfam": "ipv4", 00:07:39.842 "trsvcid": "$NVMF_PORT", 00:07:39.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.842 "hdgst": ${hdgst:-false}, 00:07:39.842 "ddgst": ${ddgst:-false} 00:07:39.842 }, 00:07:39.842 "method": "bdev_nvme_attach_controller" 00:07:39.842 } 00:07:39.842 EOF 00:07:39.842 )") 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2547440 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:39.842 { 00:07:39.842 "params": { 
00:07:39.842 "name": "Nvme$subsystem", 00:07:39.842 "trtype": "$TEST_TRANSPORT", 00:07:39.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.842 "adrfam": "ipv4", 00:07:39.842 "trsvcid": "$NVMF_PORT", 00:07:39.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.842 "hdgst": ${hdgst:-false}, 00:07:39.842 "ddgst": ${ddgst:-false} 00:07:39.842 }, 00:07:39.842 "method": "bdev_nvme_attach_controller" 00:07:39.842 } 00:07:39.842 EOF 00:07:39.842 )") 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:39.842 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:39.842 { 00:07:39.842 "params": { 00:07:39.842 "name": "Nvme$subsystem", 00:07:39.842 "trtype": "$TEST_TRANSPORT", 00:07:39.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:39.842 "adrfam": "ipv4", 00:07:39.842 "trsvcid": "$NVMF_PORT", 00:07:39.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:39.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:39.842 "hdgst": ${hdgst:-false}, 00:07:39.842 "ddgst": ${ddgst:-false} 00:07:39.842 }, 00:07:39.843 "method": "bdev_nvme_attach_controller" 00:07:39.843 } 00:07:39.843 EOF 00:07:39.843 )") 00:07:39.843 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:39.843 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2547433 00:07:39.843 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:39.843 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:39.843 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:39.843 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:39.843 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:39.843 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:39.843 "params": { 00:07:39.843 "name": "Nvme1", 00:07:39.843 "trtype": "tcp", 00:07:39.843 "traddr": "10.0.0.2", 00:07:39.843 "adrfam": "ipv4", 00:07:39.843 "trsvcid": "4420", 00:07:39.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:39.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:39.843 "hdgst": false, 00:07:39.843 "ddgst": false 00:07:39.843 }, 00:07:39.843 "method": "bdev_nvme_attach_controller" 00:07:39.843 }' 00:07:40.100 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
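The rpc_cmd calls issued earlier in this test (bdev_set_options through nvmf_subsystem_add_listener) amount to the following plain rpc.py sequence; arguments are copied from the trace, relative paths are used for brevity, and the default /var/tmp/spdk.sock socket is assumed:

  ./scripts/rpc.py bdev_set_options -p 5 -c 1                 # tiny bdev_io pool/cache, which is what forces I/O to wait in this test
  ./scripts/rpc.py framework_start_init                       # leave --wait-for-rpc mode
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB backing bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
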
00:07:40.100 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:40.100 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:40.100 "params": { 00:07:40.100 "name": "Nvme1", 00:07:40.100 "trtype": "tcp", 00:07:40.100 "traddr": "10.0.0.2", 00:07:40.100 "adrfam": "ipv4", 00:07:40.100 "trsvcid": "4420", 00:07:40.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:40.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:40.100 "hdgst": false, 00:07:40.100 "ddgst": false 00:07:40.100 }, 00:07:40.100 "method": "bdev_nvme_attach_controller" 00:07:40.100 }' 00:07:40.100 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:40.100 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:40.100 "params": { 00:07:40.100 "name": "Nvme1", 00:07:40.100 "trtype": "tcp", 00:07:40.100 "traddr": "10.0.0.2", 00:07:40.100 "adrfam": "ipv4", 00:07:40.100 "trsvcid": "4420", 00:07:40.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:40.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:40.100 "hdgst": false, 00:07:40.100 "ddgst": false 00:07:40.100 }, 00:07:40.100 "method": "bdev_nvme_attach_controller" 00:07:40.100 }' 00:07:40.100 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:40.100 10:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:40.100 "params": { 00:07:40.100 "name": "Nvme1", 00:07:40.100 "trtype": "tcp", 00:07:40.100 "traddr": "10.0.0.2", 00:07:40.100 "adrfam": "ipv4", 00:07:40.100 "trsvcid": "4420", 00:07:40.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:40.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:40.100 "hdgst": false, 00:07:40.100 "ddgst": false 00:07:40.100 }, 00:07:40.100 "method": "bdev_nvme_attach_controller" 00:07:40.100 }' 00:07:40.100 [2024-11-07 10:36:07.539347] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:07:40.100 [2024-11-07 10:36:07.539398] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:40.100 [2024-11-07 10:36:07.542283] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:07:40.100 [2024-11-07 10:36:07.542329] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:40.100 [2024-11-07 10:36:07.543417] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:07:40.100 [2024-11-07 10:36:07.543421] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
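The four bdevperf instances whose start-up banners appear above differ only in core mask, instance id and workload; each receives the generated target JSON over a process substitution, which shows up in the trace as --json /dev/fd/63. The real script launches them one by one and records WRITE_PID, READ_PID, FLUSH_PID and UNMAP_PID; a condensed loop with the same arguments would be:

  for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
      set -- $spec   # -> core mask, instance id, workload
      ./build/examples/bdevperf -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
          -q 128 -o 4096 -w "$3" -t 1 -s 256 &
  done
  wait   # the real script waits on each recorded PID individually
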
00:07:40.100 [2024-11-07 10:36:07.543467] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-07 10:36:07.543468] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:40.100 --proc-type=auto ] 00:07:40.100 [2024-11-07 10:36:07.724170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.101 [2024-11-07 10:36:07.767173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:40.358 [2024-11-07 10:36:07.817398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.358 [2024-11-07 10:36:07.860676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:40.358 [2024-11-07 10:36:07.918236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.358 [2024-11-07 10:36:07.969693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:40.358 [2024-11-07 10:36:07.978246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.358 [2024-11-07 10:36:08.021168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:40.615 Running I/O for 1 seconds... 00:07:40.615 Running I/O for 1 seconds... 00:07:40.615 Running I/O for 1 seconds... 00:07:40.615 Running I/O for 1 seconds... 00:07:41.548 7697.00 IOPS, 30.07 MiB/s [2024-11-07T09:36:09.219Z] 245408.00 IOPS, 958.62 MiB/s 00:07:41.548 Latency(us) 00:07:41.548 [2024-11-07T09:36:09.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.548 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:41.548 Nvme1n1 : 1.00 245029.59 957.15 0.00 0.00 520.30 229.73 1531.55 00:07:41.548 [2024-11-07T09:36:09.219Z] =================================================================================================================== 00:07:41.548 [2024-11-07T09:36:09.219Z] Total : 245029.59 957.15 0.00 0.00 520.30 229.73 1531.55 00:07:41.548 00:07:41.548 Latency(us) 00:07:41.548 [2024-11-07T09:36:09.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.548 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:41.548 Nvme1n1 : 1.02 7701.03 30.08 0.00 0.00 16499.62 6553.60 28038.01 00:07:41.548 [2024-11-07T09:36:09.219Z] =================================================================================================================== 00:07:41.548 [2024-11-07T09:36:09.219Z] Total : 7701.03 30.08 0.00 0.00 16499.62 6553.60 28038.01 00:07:41.806 7223.00 IOPS, 28.21 MiB/s 00:07:41.806 Latency(us) 00:07:41.806 [2024-11-07T09:36:09.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.806 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:41.806 Nvme1n1 : 1.01 7325.57 28.62 0.00 0.00 17424.39 4188.61 34420.65 00:07:41.806 [2024-11-07T09:36:09.477Z] =================================================================================================================== 00:07:41.806 [2024-11-07T09:36:09.477Z] Total : 7325.57 28.62 0.00 0.00 17424.39 4188.61 34420.65 00:07:41.806 12252.00 IOPS, 47.86 MiB/s 00:07:41.806 Latency(us) 00:07:41.806 [2024-11-07T09:36:09.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:07:41.806 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:41.806 Nvme1n1 : 1.01 12335.61 48.19 0.00 0.00 10347.95 3960.65 19831.76 00:07:41.806 [2024-11-07T09:36:09.477Z] =================================================================================================================== 00:07:41.806 [2024-11-07T09:36:09.477Z] Total : 12335.61 48.19 0.00 0.00 10347.95 3960.65 19831.76 00:07:41.806 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2547435 00:07:41.806 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2547437 00:07:41.806 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2547440 00:07:41.806 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:41.807 rmmod nvme_tcp 00:07:41.807 rmmod nvme_fabrics 00:07:41.807 rmmod nvme_keyring 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2547401 ']' 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2547401 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2547401 ']' 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2547401 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:41.807 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2547401 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 
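Teardown, as traced above: the benchmark PIDs are reaped, the subsystem is deleted over RPC, the host-side NVMe modules are unloaded, and the target process is killed. In plain commands, with the namespace removal done by _remove_spdk_ns only summarized (assumed) on the last lines:

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring   # host-side cleanup, mirrors the rmmod lines above
  kill "$nvmfpid" && wait "$nvmfpid"                  # stop nvmf_tgt (pid 2547401 in this run)
  ip netns delete cvl_0_0_ns_spdk                     # what _remove_spdk_ns ultimately does (assumed)
  ip -4 addr flush cvl_0_1
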
00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2547401' 00:07:42.065 killing process with pid 2547401 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2547401 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2547401 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.065 10:36:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.597 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:44.597 00:07:44.598 real 0m10.416s 00:07:44.598 user 0m16.238s 00:07:44.598 sys 0m5.861s 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:44.598 ************************************ 00:07:44.598 END TEST nvmf_bdev_io_wait 00:07:44.598 ************************************ 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:44.598 ************************************ 00:07:44.598 START TEST nvmf_queue_depth 00:07:44.598 ************************************ 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:44.598 * Looking for test storage... 
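Each test in this log is wrapped by run_test, which prints the START/END banners and the real/user/sys timing seen above before the next test (nvmf_queue_depth here) begins. A sketch of its observable behaviour only; the real helper in autotest_common.sh also manages xtrace state and timing records:

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                 # e.g. run_test nvmf_queue_depth ./test/nvmf/target/queue_depth.sh --transport=tcp
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
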
00:07:44.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:44.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.598 --rc genhtml_branch_coverage=1 00:07:44.598 --rc genhtml_function_coverage=1 00:07:44.598 --rc genhtml_legend=1 00:07:44.598 --rc geninfo_all_blocks=1 00:07:44.598 --rc geninfo_unexecuted_blocks=1 00:07:44.598 00:07:44.598 ' 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:44.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.598 --rc genhtml_branch_coverage=1 00:07:44.598 --rc genhtml_function_coverage=1 00:07:44.598 --rc genhtml_legend=1 00:07:44.598 --rc geninfo_all_blocks=1 00:07:44.598 --rc geninfo_unexecuted_blocks=1 00:07:44.598 00:07:44.598 ' 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:44.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.598 --rc genhtml_branch_coverage=1 00:07:44.598 --rc genhtml_function_coverage=1 00:07:44.598 --rc genhtml_legend=1 00:07:44.598 --rc geninfo_all_blocks=1 00:07:44.598 --rc geninfo_unexecuted_blocks=1 00:07:44.598 00:07:44.598 ' 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:44.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.598 --rc genhtml_branch_coverage=1 00:07:44.598 --rc genhtml_function_coverage=1 00:07:44.598 --rc genhtml_legend=1 00:07:44.598 --rc geninfo_all_blocks=1 00:07:44.598 --rc geninfo_unexecuted_blocks=1 00:07:44.598 00:07:44.598 ' 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.598 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.599 10:36:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:44.599 10:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.866 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:49.867 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:49.867 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:49.867 Found net devices under 0000:86:00.0: cvl_0_0 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:49.867 Found net devices under 0000:86:00.1: cvl_0_1 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:49.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:07:49.867 00:07:49.867 --- 10.0.0.2 ping statistics --- 00:07:49.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.867 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:49.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:07:49.867 00:07:49.867 --- 10.0.0.1 ping statistics --- 00:07:49.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.867 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:49.867 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:50.126 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:50.126 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:50.126 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:50.126 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.126 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2551683 00:07:50.126 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:50.126 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2551683 00:07:50.126 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2551683 ']' 00:07:50.126 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.126 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:50.126 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.126 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:50.126 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.126 [2024-11-07 10:36:17.613507] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:07:50.126 [2024-11-07 10:36:17.613555] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.126 [2024-11-07 10:36:17.684326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.126 [2024-11-07 10:36:17.726479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.126 [2024-11-07 10:36:17.726518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.126 [2024-11-07 10:36:17.726525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.126 [2024-11-07 10:36:17.726532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.126 [2024-11-07 10:36:17.726538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:50.126 [2024-11-07 10:36:17.727113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.385 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.385 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:07:50.385 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:50.385 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.385 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.385 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.385 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.385 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.385 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.385 [2024-11-07 10:36:17.862189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.386 Malloc0 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.386 10:36:17 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.386 [2024-11-07 10:36:17.904546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2551851 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2551851 /var/tmp/bdevperf.sock 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2551851 ']' 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:50.386 10:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.386 [2024-11-07 10:36:17.953177] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:07:50.386 [2024-11-07 10:36:17.953220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551851 ] 00:07:50.386 [2024-11-07 10:36:18.015470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.645 [2024-11-07 10:36:18.059257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.645 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.645 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:07:50.645 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:50.645 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.645 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:50.645 NVMe0n1 00:07:50.645 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.645 10:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:50.645 Running I/O for 10 seconds... 00:07:52.957 11430.00 IOPS, 44.65 MiB/s [2024-11-07T09:36:21.562Z] 11766.00 IOPS, 45.96 MiB/s [2024-11-07T09:36:22.499Z] 11937.33 IOPS, 46.63 MiB/s [2024-11-07T09:36:23.433Z] 12036.00 IOPS, 47.02 MiB/s [2024-11-07T09:36:24.368Z] 12094.40 IOPS, 47.24 MiB/s [2024-11-07T09:36:25.743Z] 12139.50 IOPS, 47.42 MiB/s [2024-11-07T09:36:26.678Z] 12153.29 IOPS, 47.47 MiB/s [2024-11-07T09:36:27.612Z] 12152.25 IOPS, 47.47 MiB/s [2024-11-07T09:36:28.548Z] 12162.78 IOPS, 47.51 MiB/s [2024-11-07T09:36:28.548Z] 12172.10 IOPS, 47.55 MiB/s 00:08:00.877 Latency(us) 00:08:00.877 [2024-11-07T09:36:28.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.877 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:00.877 Verification LBA range: start 0x0 length 0x4000 00:08:00.877 NVMe0n1 : 10.05 12209.36 47.69 0.00 0.00 83609.12 17552.25 55848.07 00:08:00.877 [2024-11-07T09:36:28.548Z] =================================================================================================================== 00:08:00.877 [2024-11-07T09:36:28.548Z] Total : 12209.36 47.69 0.00 0.00 83609.12 17552.25 55848.07 00:08:00.877 { 00:08:00.877 "results": [ 00:08:00.877 { 00:08:00.877 "job": "NVMe0n1", 00:08:00.877 "core_mask": "0x1", 00:08:00.877 "workload": "verify", 00:08:00.877 "status": "finished", 00:08:00.877 "verify_range": { 00:08:00.877 "start": 0, 00:08:00.877 "length": 16384 00:08:00.877 }, 00:08:00.877 "queue_depth": 1024, 00:08:00.877 "io_size": 4096, 00:08:00.877 "runtime": 10.052858, 00:08:00.877 "iops": 12209.363745116065, 00:08:00.877 "mibps": 47.69282712935963, 00:08:00.877 "io_failed": 0, 00:08:00.877 "io_timeout": 0, 00:08:00.877 "avg_latency_us": 83609.11570153279, 00:08:00.877 "min_latency_us": 17552.250434782607, 00:08:00.877 "max_latency_us": 55848.06956521739 00:08:00.877 } 00:08:00.877 ], 00:08:00.877 "core_count": 1 00:08:00.877 } 00:08:00.877 10:36:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2551851 00:08:00.877 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2551851 ']' 00:08:00.877 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2551851 00:08:00.877 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:00.877 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:00.877 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2551851 00:08:00.877 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:00.877 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:00.877 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2551851' 00:08:00.877 killing process with pid 2551851 00:08:00.877 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2551851 00:08:00.877 Received shutdown signal, test time was about 10.000000 seconds 00:08:00.877 00:08:00.877 Latency(us) 00:08:00.877 [2024-11-07T09:36:28.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.877 [2024-11-07T09:36:28.548Z] =================================================================================================================== 00:08:00.877 [2024-11-07T09:36:28.548Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:00.877 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2551851 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:01.135 rmmod nvme_tcp 00:08:01.135 rmmod nvme_fabrics 00:08:01.135 rmmod nvme_keyring 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2551683 ']' 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2551683 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2551683 ']' 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 2551683 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2551683 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2551683' 00:08:01.135 killing process with pid 2551683 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2551683 00:08:01.135 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2551683 00:08:01.393 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:01.393 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:01.393 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:01.393 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:01.393 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:01.393 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:01.393 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:01.393 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:01.393 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:01.393 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.393 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.393 10:36:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.925 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:03.925 00:08:03.925 real 0m19.194s 00:08:03.925 user 0m22.781s 00:08:03.925 sys 0m5.714s 00:08:03.925 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.925 10:36:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.925 ************************************ 00:08:03.925 END TEST nvmf_queue_depth 00:08:03.925 ************************************ 00:08:03.925 10:36:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:03.925 10:36:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:03.925 10:36:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.925 10:36:31 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:03.925 ************************************ 00:08:03.925 START TEST nvmf_target_multipath 00:08:03.925 ************************************ 00:08:03.925 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:03.925 * Looking for test storage... 00:08:03.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.925 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:03.925 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:03.925 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:03.925 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:03.925 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:03.925 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:03.925 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:03.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.926 --rc genhtml_branch_coverage=1 00:08:03.926 --rc genhtml_function_coverage=1 00:08:03.926 --rc genhtml_legend=1 00:08:03.926 --rc geninfo_all_blocks=1 00:08:03.926 --rc geninfo_unexecuted_blocks=1 00:08:03.926 00:08:03.926 ' 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:03.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.926 --rc genhtml_branch_coverage=1 00:08:03.926 --rc genhtml_function_coverage=1 00:08:03.926 --rc genhtml_legend=1 00:08:03.926 --rc geninfo_all_blocks=1 00:08:03.926 --rc geninfo_unexecuted_blocks=1 00:08:03.926 00:08:03.926 ' 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:03.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.926 --rc genhtml_branch_coverage=1 00:08:03.926 --rc genhtml_function_coverage=1 00:08:03.926 --rc genhtml_legend=1 00:08:03.926 --rc geninfo_all_blocks=1 00:08:03.926 --rc geninfo_unexecuted_blocks=1 00:08:03.926 00:08:03.926 ' 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:03.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.926 --rc genhtml_branch_coverage=1 00:08:03.926 --rc genhtml_function_coverage=1 00:08:03.926 --rc genhtml_legend=1 00:08:03.926 --rc geninfo_all_blocks=1 00:08:03.926 --rc geninfo_unexecuted_blocks=1 00:08:03.926 00:08:03.926 ' 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:03.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:03.926 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:03.927 10:36:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.194 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:09.195 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:09.195 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:09.195 Found net devices under 0000:86:00.0: cvl_0_0 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.195 10:36:36 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:09.195 Found net devices under 0000:86:00.1: cvl_0_1 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:09.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:08:09.195 00:08:09.195 --- 10.0.0.2 ping statistics --- 00:08:09.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.195 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:08:09.195 00:08:09.195 --- 10.0.0.1 ping statistics --- 00:08:09.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.195 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.195 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:09.196 only one NIC for nvmf test 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
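For reference, the nvmf_tcp_init trace above reduces to the following split of the two E810 ports (cvl_0_0 / cvl_0_1) across a network namespace. This is a sketch using the interface names and addresses from this run; the retries and error handling in nvmf/common.sh are omitted:

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                                  # default namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and back

The two pings confirm the 10.0.0.1/10.0.0.2 pair is reachable before the test proper starts.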
00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.196 rmmod nvme_tcp 00:08:09.196 rmmod nvme_fabrics 00:08:09.196 rmmod nvme_keyring 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.196 10:36:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:11.100 00:08:11.100 real 0m7.704s 00:08:11.100 user 0m1.683s 00:08:11.100 sys 0m4.037s 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:11.100 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:11.100 ************************************ 00:08:11.100 END TEST nvmf_target_multipath 00:08:11.100 ************************************ 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.360 ************************************ 00:08:11.360 START TEST nvmf_zcopy 00:08:11.360 ************************************ 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:11.360 * Looking for test storage... 
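The multipath test above stops early ("only one NIC for nvmf test", exit 0), apparently because NVMF_SECOND_TARGET_IP is left empty, and its nvmftestfini teardown boils down to roughly the following. This is a sketch; _remove_spdk_ns runs with its output suppressed in the trace, so the namespace-deletion command shown here is an assumed expansion:

    modprobe -v -r nvme-tcp                                # unload initiator-side modules (retried up to 20 times by the harness)
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged SPDK_NVMF
    ip netns del cvl_0_0_ns_spdk                           # assumed expansion of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address

The nvmf_zcopy test that starts here repeats the same prepare/teardown cycle around its own workload.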
00:08:11.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:11.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.360 --rc genhtml_branch_coverage=1 00:08:11.360 --rc genhtml_function_coverage=1 00:08:11.360 --rc genhtml_legend=1 00:08:11.360 --rc geninfo_all_blocks=1 00:08:11.360 --rc geninfo_unexecuted_blocks=1 00:08:11.360 00:08:11.360 ' 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:11.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.360 --rc genhtml_branch_coverage=1 00:08:11.360 --rc genhtml_function_coverage=1 00:08:11.360 --rc genhtml_legend=1 00:08:11.360 --rc geninfo_all_blocks=1 00:08:11.360 --rc geninfo_unexecuted_blocks=1 00:08:11.360 00:08:11.360 ' 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:11.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.360 --rc genhtml_branch_coverage=1 00:08:11.360 --rc genhtml_function_coverage=1 00:08:11.360 --rc genhtml_legend=1 00:08:11.360 --rc geninfo_all_blocks=1 00:08:11.360 --rc geninfo_unexecuted_blocks=1 00:08:11.360 00:08:11.360 ' 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:11.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.360 --rc genhtml_branch_coverage=1 00:08:11.360 --rc genhtml_function_coverage=1 00:08:11.360 --rc genhtml_legend=1 00:08:11.360 --rc geninfo_all_blocks=1 00:08:11.360 --rc geninfo_unexecuted_blocks=1 00:08:11.360 00:08:11.360 ' 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.360 10:36:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:11.360 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:11.361 10:36:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:16.732 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:16.732 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:16.732 Found net devices under 0000:86:00.0: cvl_0_0 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:16.732 Found net devices under 0000:86:00.1: cvl_0_1 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.732 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.733 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:16.733 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.733 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.733 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:16.733 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:16.733 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.733 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.733 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:16.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:08:16.991 00:08:16.991 --- 10.0.0.2 ping statistics --- 00:08:16.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.991 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:16.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:08:16.991 00:08:16.991 --- 10.0.0.1 ping statistics --- 00:08:16.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.991 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2560532 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2560532 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2560532 ']' 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:16.991 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.250 [2024-11-07 10:36:44.699584] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:08:17.250 [2024-11-07 10:36:44.699636] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.250 [2024-11-07 10:36:44.767251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.250 [2024-11-07 10:36:44.808216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.250 [2024-11-07 10:36:44.808250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.250 [2024-11-07 10:36:44.808257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.250 [2024-11-07 10:36:44.808263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.250 [2024-11-07 10:36:44.808268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.250 [2024-11-07 10:36:44.808882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.250 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:17.250 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:17.250 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:17.250 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:17.250 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.509 [2024-11-07 10:36:44.940140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.509 [2024-11-07 10:36:44.964359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.509 malloc0 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.509 10:36:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.509 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:17.509 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:17.509 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:17.509 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:17.509 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:17.509 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:17.509 { 00:08:17.509 "params": { 00:08:17.509 "name": "Nvme$subsystem", 00:08:17.509 "trtype": "$TEST_TRANSPORT", 00:08:17.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.509 "adrfam": "ipv4", 00:08:17.509 "trsvcid": "$NVMF_PORT", 00:08:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.509 "hdgst": ${hdgst:-false}, 00:08:17.509 "ddgst": ${ddgst:-false} 00:08:17.509 }, 00:08:17.509 "method": "bdev_nvme_attach_controller" 00:08:17.509 } 00:08:17.509 EOF 00:08:17.509 )") 00:08:17.509 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:17.509 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
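The target-side configuration issued through rpc_cmd above, against the nvmf_tgt started earlier inside cvl_0_0_ns_spdk, corresponds to the following scripts/rpc.py sequence. This is a sketch: rpc_cmd forwards these arguments to scripts/rpc.py over the default /var/tmp/spdk.sock, and the NQN, serial, bdev and listener parameters are simply the ones used in this run:

    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy                        # TCP transport with zero-copy enabled
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                               # 32 MiB malloc bdev, 4 KiB blocks
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1       # expose it as namespace 1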
00:08:17.509 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:17.509 10:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:17.509 "params": { 00:08:17.509 "name": "Nvme1", 00:08:17.509 "trtype": "tcp", 00:08:17.509 "traddr": "10.0.0.2", 00:08:17.509 "adrfam": "ipv4", 00:08:17.509 "trsvcid": "4420", 00:08:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.509 "hdgst": false, 00:08:17.509 "ddgst": false 00:08:17.509 }, 00:08:17.509 "method": "bdev_nvme_attach_controller" 00:08:17.509 }' 00:08:17.509 [2024-11-07 10:36:45.030793] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:08:17.509 [2024-11-07 10:36:45.030839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2560554 ] 00:08:17.509 [2024-11-07 10:36:45.094499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.509 [2024-11-07 10:36:45.135882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.768 Running I/O for 10 seconds... 00:08:20.078 8376.00 IOPS, 65.44 MiB/s [2024-11-07T09:36:48.684Z] 8458.50 IOPS, 66.08 MiB/s [2024-11-07T09:36:49.620Z] 8492.00 IOPS, 66.34 MiB/s [2024-11-07T09:36:50.555Z] 8511.50 IOPS, 66.50 MiB/s [2024-11-07T09:36:51.490Z] 8483.80 IOPS, 66.28 MiB/s [2024-11-07T09:36:52.865Z] 8494.83 IOPS, 66.37 MiB/s [2024-11-07T09:36:53.801Z] 8502.71 IOPS, 66.43 MiB/s [2024-11-07T09:36:54.737Z] 8514.75 IOPS, 66.52 MiB/s [2024-11-07T09:36:55.672Z] 8523.33 IOPS, 66.59 MiB/s [2024-11-07T09:36:55.672Z] 8527.90 IOPS, 66.62 MiB/s 00:08:28.001 Latency(us) 00:08:28.001 [2024-11-07T09:36:55.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.001 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:28.001 Verification LBA range: start 0x0 length 0x1000 00:08:28.001 Nvme1n1 : 10.01 8531.56 66.65 0.00 0.00 14960.86 2664.18 22909.11 00:08:28.001 [2024-11-07T09:36:55.672Z] =================================================================================================================== 00:08:28.001 [2024-11-07T09:36:55.672Z] Total : 8531.56 66.65 0.00 0.00 14960.86 2664.18 22909.11 00:08:28.001 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:28.001 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2562391 00:08:28.001 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:28.001 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.001 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:28.001 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:28.001 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:28.001 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:28.001 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:28.001 { 00:08:28.001 "params": { 00:08:28.001 "name": 
"Nvme$subsystem", 00:08:28.001 "trtype": "$TEST_TRANSPORT", 00:08:28.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.001 "adrfam": "ipv4", 00:08:28.001 "trsvcid": "$NVMF_PORT", 00:08:28.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.001 "hdgst": ${hdgst:-false}, 00:08:28.001 "ddgst": ${ddgst:-false} 00:08:28.001 }, 00:08:28.001 "method": "bdev_nvme_attach_controller" 00:08:28.001 } 00:08:28.001 EOF 00:08:28.001 )") 00:08:28.001 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:28.001 [2024-11-07 10:36:55.613382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.001 [2024-11-07 10:36:55.613415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.002 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:28.002 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:28.002 10:36:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:28.002 "params": { 00:08:28.002 "name": "Nvme1", 00:08:28.002 "trtype": "tcp", 00:08:28.002 "traddr": "10.0.0.2", 00:08:28.002 "adrfam": "ipv4", 00:08:28.002 "trsvcid": "4420", 00:08:28.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.002 "hdgst": false, 00:08:28.002 "ddgst": false 00:08:28.002 }, 00:08:28.002 "method": "bdev_nvme_attach_controller" 00:08:28.002 }' 00:08:28.002 [2024-11-07 10:36:55.625386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.002 [2024-11-07 10:36:55.625399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.002 [2024-11-07 10:36:55.637265] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:08:28.002 [2024-11-07 10:36:55.637309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562391 ] 00:08:28.002 [2024-11-07 10:36:55.637416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.002 [2024-11-07 10:36:55.637426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.002 [2024-11-07 10:36:55.649452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.002 [2024-11-07 10:36:55.649462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.002 [2024-11-07 10:36:55.661487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.002 [2024-11-07 10:36:55.661496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.673514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.673523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.685562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.685571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.695777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.261 [2024-11-07 10:36:55.697593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.697603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.709626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.709642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.721656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.721665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.733693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.733707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.738239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.261 [2024-11-07 10:36:55.745719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.745729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.757765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.757784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.769788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.769803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.781816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:28.261 [2024-11-07 10:36:55.781828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.793850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.793862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.805881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.805895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.817915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.817924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.829961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.829980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.841989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.842008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.854020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.854035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.866053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.866067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.878086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.878095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.890118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.890127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.902154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.902167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.914185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.914198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.261 [2024-11-07 10:36:55.926217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.261 [2024-11-07 10:36:55.926227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:55.938249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:55.938259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:55.950287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:55.950300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 
10:36:55.962317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:55.962328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:55.974348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:55.974357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:55.986386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:55.986396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:55.998420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:55.998430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:56.010468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:56.010485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 Running I/O for 5 seconds... 00:08:28.520 [2024-11-07 10:36:56.025977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:56.025997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:56.040580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:56.040599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:56.054533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:56.054552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:56.068941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:56.068960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:56.080497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:56.080519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:56.095174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:56.095192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:56.105846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:56.105864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:56.120324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:56.120343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:56.134448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:56.134466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:56.148693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
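The two-line pattern that repeats through the rest of this run records nvmf_subsystem_add_ns requests targeting NSID 1 while that NSID is still attached: subsystem.c rejects each attempt with "Requested NSID 1 already in use" and nvmf_rpc.c surfaces it as "Unable to add namespace". A minimal sketch of the condition, assuming a target configured as above and scripts/rpc.py as the RPC client (the second call is what produces the paired errors):

./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds, NSID 1 now in use
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # repeat with the same -n 1: rejected, logged as above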
00:08:28.520 [2024-11-07 10:36:56.148711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:56.163029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:56.163047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.520 [2024-11-07 10:36:56.177298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.520 [2024-11-07 10:36:56.177317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.191751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.191769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.202651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.202669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.217037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.217063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.230833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.230851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.244920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.244938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.258600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.258619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.272360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.272378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.286467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.286486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.297264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.297282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.312114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.312133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.326088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.326106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.340175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.340199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.354275] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.354293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.368567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.368586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.382895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.382914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.393396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.393415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.407939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.407959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.421573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.421594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.779 [2024-11-07 10:36:56.435892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.779 [2024-11-07 10:36:56.435912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.450105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.450124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.461053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.461072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.475407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.475426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.489742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.489761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.503524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.503543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.517691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.517710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.531441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.531460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.545937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.545956] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.556942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.556961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.571185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.571204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.584925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.584944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.599197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.599222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.613654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.613673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.624772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.624791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.638949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.638967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.652681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.652699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.666682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.666705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.680590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.680609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.038 [2024-11-07 10:36:56.694405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.038 [2024-11-07 10:36:56.694424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.708712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.708732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.723096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.723114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.737147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.737166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.751559] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.751577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.762502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.762520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.777508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.777527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.793284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.793304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.807402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.807420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.821407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.821425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.835478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.835496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.846224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.846242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.861030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.861049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.875281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.875299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.889628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.889645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.903442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.903461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.917547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.917565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.931803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.931821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.943486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.943504] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.298 [2024-11-07 10:36:56.958374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.298 [2024-11-07 10:36:56.958393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.555 [2024-11-07 10:36:56.970277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.555 [2024-11-07 10:36:56.970295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.555 [2024-11-07 10:36:56.984724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.555 [2024-11-07 10:36:56.984742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.555 [2024-11-07 10:36:56.998867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.555 [2024-11-07 10:36:56.998885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.555 [2024-11-07 10:36:57.012842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.555 [2024-11-07 10:36:57.012860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.555 16399.00 IOPS, 128.12 MiB/s [2024-11-07T09:36:57.226Z] [2024-11-07 10:36:57.026690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.555 [2024-11-07 10:36:57.026708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.555 [2024-11-07 10:36:57.040999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.555 [2024-11-07 10:36:57.041018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.555 [2024-11-07 10:36:57.051582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.555 [2024-11-07 10:36:57.051600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.555 [2024-11-07 10:36:57.066029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.555 [2024-11-07 10:36:57.066047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.555 [2024-11-07 10:36:57.079819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.555 [2024-11-07 10:36:57.079837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.555 [2024-11-07 10:36:57.093765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.555 [2024-11-07 10:36:57.093783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.555 [2024-11-07 10:36:57.107439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.556 [2024-11-07 10:36:57.107457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.556 [2024-11-07 10:36:57.121749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.556 [2024-11-07 10:36:57.121767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.556 [2024-11-07 10:36:57.135667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.556 [2024-11-07 10:36:57.135685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.556 [2024-11-07 
10:36:57.149619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.556 [2024-11-07 10:36:57.149638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.556 [2024-11-07 10:36:57.163304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.556 [2024-11-07 10:36:57.163322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.556 [2024-11-07 10:36:57.177213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.556 [2024-11-07 10:36:57.177231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.556 [2024-11-07 10:36:57.191331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.556 [2024-11-07 10:36:57.191348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.556 [2024-11-07 10:36:57.202625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.556 [2024-11-07 10:36:57.202643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.556 [2024-11-07 10:36:57.217068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.556 [2024-11-07 10:36:57.217086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.231117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.231136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.242558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.242576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.257059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.257077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.270653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.270671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.284521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.284540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.298723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.298741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.312763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.312781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.327124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.327141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.340781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.340799] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.354491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.354509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.368353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.368376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.382246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.382264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.396345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.396364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.410084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.410102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.423895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.423912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.437635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.437653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.451330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.451348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.465064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.465081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.814 [2024-11-07 10:36:57.478875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.814 [2024-11-07 10:36:57.478894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.493032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.493050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.507168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.507187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.521165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.521183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.535056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.535074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.548917] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.548935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.562831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.562850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.576845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.576865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.591028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.591047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.605016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.605034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.618694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.618712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.633240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.633263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.648307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.648325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.662591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.662610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.676643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.073 [2024-11-07 10:36:57.676661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.073 [2024-11-07 10:36:57.690314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.074 [2024-11-07 10:36:57.690331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.074 [2024-11-07 10:36:57.704344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.074 [2024-11-07 10:36:57.704362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.074 [2024-11-07 10:36:57.718112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.074 [2024-11-07 10:36:57.718131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.074 [2024-11-07 10:36:57.732323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.074 [2024-11-07 10:36:57.732341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.332 [2024-11-07 10:36:57.746854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.332 [2024-11-07 10:36:57.746872] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.332 [2024-11-07 10:36:57.757976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.332 [2024-11-07 10:36:57.757995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.332 [2024-11-07 10:36:57.772358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.332 [2024-11-07 10:36:57.772382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.332 [2024-11-07 10:36:57.785926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.332 [2024-11-07 10:36:57.785945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.332 [2024-11-07 10:36:57.800264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.332 [2024-11-07 10:36:57.800283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.332 [2024-11-07 10:36:57.814106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.332 [2024-11-07 10:36:57.814125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.332 [2024-11-07 10:36:57.828426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.332 [2024-11-07 10:36:57.828452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.332 [2024-11-07 10:36:57.839895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.332 [2024-11-07 10:36:57.839914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.332 [2024-11-07 10:36:57.854138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.333 [2024-11-07 10:36:57.854156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.333 [2024-11-07 10:36:57.868699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.333 [2024-11-07 10:36:57.868717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.333 [2024-11-07 10:36:57.884399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.333 [2024-11-07 10:36:57.884417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.333 [2024-11-07 10:36:57.898473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.333 [2024-11-07 10:36:57.898497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.333 [2024-11-07 10:36:57.912559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.333 [2024-11-07 10:36:57.912577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.333 [2024-11-07 10:36:57.926926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.333 [2024-11-07 10:36:57.926945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.333 [2024-11-07 10:36:57.940755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.333 [2024-11-07 10:36:57.940774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.333 [2024-11-07 10:36:57.955183] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.333 [2024-11-07 10:36:57.955202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.333 [2024-11-07 10:36:57.969242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.333 [2024-11-07 10:36:57.969261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.333 [2024-11-07 10:36:57.982815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.333 [2024-11-07 10:36:57.982833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.333 [2024-11-07 10:36:57.997123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.333 [2024-11-07 10:36:57.997143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.591 [2024-11-07 10:36:58.011359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.591 [2024-11-07 10:36:58.011379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.591 16543.50 IOPS, 129.25 MiB/s [2024-11-07T09:36:58.262Z] [2024-11-07 10:36:58.022199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.591 [2024-11-07 10:36:58.022218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.591 [2024-11-07 10:36:58.037144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.591 [2024-11-07 10:36:58.037163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.048465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.048483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.063260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.063278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.078319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.078338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.092722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.092741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.106559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.106578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.120567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.120586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.134543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.134562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.148536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:30.592 [2024-11-07 10:36:58.148554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.162547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.162569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.176685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.176703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.190963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.190981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.204752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.204771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.218541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.218560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.232469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.232490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.592 [2024-11-07 10:36:58.246695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.592 [2024-11-07 10:36:58.246714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.261626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.261643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.277288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.277307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.291878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.291897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.302390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.302408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.316882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.316900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.330804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.330822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.345137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.345155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.359330] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.359348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.373304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.373322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.387515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.387533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.401721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.401740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.412982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.413000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.427303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.427322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.441832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.441850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.453013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.453030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.467222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.467240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.481021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.481039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.495109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.495127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.851 [2024-11-07 10:36:58.509344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.851 [2024-11-07 10:36:58.509362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.520248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.520266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.534980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.534998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.545657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.545675] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.559702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.559720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.573670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.573689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.587336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.587355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.601629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.601656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.610655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.610674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.624805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.624825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.638673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.638691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.652837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.652856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.666621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.666640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.681109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.681127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.692078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.692096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.707107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.707125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.722382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.722400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.736643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.736661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.750455] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.750474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.110 [2024-11-07 10:36:58.764616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.110 [2024-11-07 10:36:58.764634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.778835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.778853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.792666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.792684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.806650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.806668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.820568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.820586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.834699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.834717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.848694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.848713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.862410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.862428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.876624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.876642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.890763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.890782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.904180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.904198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.918289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.918307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.932348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.932367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.946160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.946179] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.960167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.960185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.974565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.974583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:58.988450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:58.988469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:59.002156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:59.002174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 [2024-11-07 10:36:59.016198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:59.016217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.369 16580.33 IOPS, 129.53 MiB/s [2024-11-07T09:36:59.040Z] [2024-11-07 10:36:59.029943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.369 [2024-11-07 10:36:59.029962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.044215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.044233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.055622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.055640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.070040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.070059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.084008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.084026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.098152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.098171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.112257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.112275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.126272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.126291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.139909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.139927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 
10:36:59.153783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.153802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.167794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.167813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.181786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.181805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.195836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.195859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.210135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.210154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.224336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.224355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.235567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.235587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.250660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.250678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.265775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.265794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.279919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.279937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.628 [2024-11-07 10:36:59.294425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.628 [2024-11-07 10:36:59.294450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.309726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.309744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.324081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.324099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.338556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.338575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.349707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.349726] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.363960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.363979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.377836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.377855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.392128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.392147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.406161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.406179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.420595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.420614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.431619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.431637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.446007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.446026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.460269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.460292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.471018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.471037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.485722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.485741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.887 [2024-11-07 10:36:59.496995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.887 [2024-11-07 10:36:59.497013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.888 [2024-11-07 10:36:59.511582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.888 [2024-11-07 10:36:59.511601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.888 [2024-11-07 10:36:59.522669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.888 [2024-11-07 10:36:59.522687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.888 [2024-11-07 10:36:59.537160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.888 [2024-11-07 10:36:59.537178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.888 [2024-11-07 10:36:59.550937] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.888 [2024-11-07 10:36:59.550957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.154 [2024-11-07 10:36:59.565096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.154 [2024-11-07 10:36:59.565114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.154 [2024-11-07 10:36:59.579316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.154 [2024-11-07 10:36:59.579334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.154 [2024-11-07 10:36:59.593092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.154 [2024-11-07 10:36:59.593110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.607097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.607115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.620875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.620893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.635220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.635239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.646598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.646618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.661212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.661230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.675069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.675087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.688894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.688912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.703243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.703261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.713953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.713975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.728582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.728600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.742758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.742776] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.756870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.756889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.770895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.770913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.784550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.784568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.798750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.798767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.155 [2024-11-07 10:36:59.812883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.155 [2024-11-07 10:36:59.812902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:36:59.827475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:36:59.827494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:36:59.843306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:36:59.843324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:36:59.857913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:36:59.857931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:36:59.871881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:36:59.871899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:36:59.885869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:36:59.885887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:36:59.899573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:36:59.899592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:36:59.914026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:36:59.914044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:36:59.924776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:36:59.924793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:36:59.934629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:36:59.934647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:36:59.949070] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:36:59.949088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:36:59.963507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:36:59.963526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:36:59.974546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:36:59.974565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:36:59.988507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:36:59.988525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:37:00.002351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:37:00.002369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:37:00.015982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:37:00.016002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 16580.00 IOPS, 129.53 MiB/s [2024-11-07T09:37:00.087Z] [2024-11-07 10:37:00.031266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:37:00.031287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:37:00.042963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:37:00.042983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:37:00.051968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:37:00.051986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:37:00.060781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:37:00.060800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.416 [2024-11-07 10:37:00.070331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.416 [2024-11-07 10:37:00.070349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.085109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.085128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.099797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.099817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.114093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.114111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.128562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:32.675 [2024-11-07 10:37:00.128581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.139628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.139647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.154665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.154685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.165481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.165500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.180200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.180218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.191427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.191451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.206282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.206300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.216811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.216829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.231342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.231361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.245415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.245441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.256155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.256173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.270704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.270722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.284734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.284753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.298587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.298605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.312859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.312877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.326783] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.326801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.675 [2024-11-07 10:37:00.341024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.675 [2024-11-07 10:37:00.341043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.354852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.354871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.369399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.369417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.385356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.385375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.399506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.399524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.413890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.413908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.424700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.424719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.439206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.439225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.453114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.453132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.467153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.467176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.481527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.481546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.495888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.495906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.511526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.511544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.525652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.525671] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.539691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.539710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.553714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.553734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.567637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.567655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.582270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.582290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.934 [2024-11-07 10:37:00.597689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.934 [2024-11-07 10:37:00.597710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.612162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.612181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.626350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.626369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.640811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.640830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.651675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.651693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.666388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.666407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.676855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.676874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.691264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.691283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.705517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.705537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.716252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.716270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.730550] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.730574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.744940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.744958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.758874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.758893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.773326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.773345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.784263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.784280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.193 [2024-11-07 10:37:00.798635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.193 [2024-11-07 10:37:00.798654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.194 [2024-11-07 10:37:00.812823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.194 [2024-11-07 10:37:00.812841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.194 [2024-11-07 10:37:00.826543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.194 [2024-11-07 10:37:00.826562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.194 [2024-11-07 10:37:00.840190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.194 [2024-11-07 10:37:00.840209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.194 [2024-11-07 10:37:00.854044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.194 [2024-11-07 10:37:00.854062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:00.867995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:00.868013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:00.881821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:00.881839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:00.895897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:00.895917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:00.909518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:00.909536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:00.923871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:00.923890] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:00.937865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:00.937884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:00.952195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:00.952214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:00.965933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:00.965951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:00.980333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:00.980351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:00.990885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:00.990911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:01.005730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:01.005748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:01.016065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:01.016082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 16541.60 IOPS, 129.23 MiB/s [2024-11-07T09:37:01.124Z] [2024-11-07 10:37:01.030274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:01.030292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 00:08:33.453 Latency(us) 00:08:33.453 [2024-11-07T09:37:01.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.453 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:33.453 Nvme1n1 : 5.01 16543.97 129.25 0.00 0.00 7729.47 3490.50 16640.45 00:08:33.453 [2024-11-07T09:37:01.124Z] =================================================================================================================== 00:08:33.453 [2024-11-07T09:37:01.124Z] Total : 16543.97 129.25 0.00 0.00 7729.47 3490.50 16640.45 00:08:33.453 [2024-11-07 10:37:01.039873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:01.039890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:01.051905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:01.051920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:01.063950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:01.063967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:01.075970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 
10:37:01.075986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:01.087999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:01.088013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:01.100029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:01.100042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.453 [2024-11-07 10:37:01.112062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.453 [2024-11-07 10:37:01.112076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.712 [2024-11-07 10:37:01.124094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.712 [2024-11-07 10:37:01.124107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.712 [2024-11-07 10:37:01.136124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.712 [2024-11-07 10:37:01.136139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.712 [2024-11-07 10:37:01.148153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.712 [2024-11-07 10:37:01.148163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.712 [2024-11-07 10:37:01.160182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.712 [2024-11-07 10:37:01.160193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.712 [2024-11-07 10:37:01.172216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.712 [2024-11-07 10:37:01.172228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.712 [2024-11-07 10:37:01.184246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.712 [2024-11-07 10:37:01.184255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.712 [2024-11-07 10:37:01.196284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:33.712 [2024-11-07 10:37:01.196293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:33.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2562391) - No such process 00:08:33.712 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2562391 00:08:33.712 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.712 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.712 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.712 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.712 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:33.712 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.712 10:37:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.712 delay0 00:08:33.712 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.712 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:33.712 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.712 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.712 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.712 10:37:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:33.712 [2024-11-07 10:37:01.354572] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:40.271 Initializing NVMe Controllers 00:08:40.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:40.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:40.271 Initialization complete. Launching workers. 00:08:40.271 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 352 00:08:40.271 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 642, failed to submit 30 00:08:40.271 success 461, unsuccessful 181, failed 0 00:08:40.271 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:40.272 rmmod nvme_tcp 00:08:40.272 rmmod nvme_fabrics 00:08:40.272 rmmod nvme_keyring 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2560532 ']' 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2560532 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2560532 ']' 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2560532 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # 
'[' Linux = Linux ']' 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2560532 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2560532' 00:08:40.272 killing process with pid 2560532 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2560532 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2560532 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.272 10:37:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.804 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:42.804 00:08:42.804 real 0m31.166s 00:08:42.804 user 0m42.036s 00:08:42.804 sys 0m10.879s 00:08:42.804 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.804 10:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.804 ************************************ 00:08:42.804 END TEST nvmf_zcopy 00:08:42.804 ************************************ 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.804 ************************************ 00:08:42.804 START TEST nvmf_nmic 00:08:42.804 ************************************ 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:42.804 * Looking for test storage... 
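Note on the zcopy namespace swap traced above: the rpc_cmd calls (nvmf_subsystem_remove_ns, bdev_delay_create, nvmf_subsystem_add_ns) map directly onto SPDK's scripts/rpc.py. The sketch below is a hedged, hand-replayable version of that sequence; it assumes an nvmf target is still running on the default RPC socket, reuses only the NQN, bdev names, and latency values visible in the trace, and everything else (the RPC variable, socket path) is illustrative rather than part of the original run.

# Hedged sketch: replay the zcopy.sh namespace swap by hand.
# Assumes the nvmf target from this run is up and listening on the default
# RPC socket (/var/tmp/spdk.sock); values are taken from the trace above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Drop the existing namespace 1 (target/zcopy.sh@52 in the trace).
$RPC nvmf_subsystem_remove_ns "$NQN" 1

# Wrap malloc0 in a delay bdev with the latency values from the trace
# (target/zcopy.sh@53).
$RPC bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Re-attach the delayed bdev as namespace 1 (target/zcopy.sh@54).
$RPC nvmf_subsystem_add_ns "$NQN" delay0 -n 1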
00:08:42.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.804 --rc genhtml_branch_coverage=1 00:08:42.804 --rc genhtml_function_coverage=1 00:08:42.804 --rc genhtml_legend=1 00:08:42.804 --rc geninfo_all_blocks=1 00:08:42.804 --rc geninfo_unexecuted_blocks=1 00:08:42.804 00:08:42.804 ' 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.804 --rc genhtml_branch_coverage=1 00:08:42.804 --rc genhtml_function_coverage=1 00:08:42.804 --rc genhtml_legend=1 00:08:42.804 --rc geninfo_all_blocks=1 00:08:42.804 --rc geninfo_unexecuted_blocks=1 00:08:42.804 00:08:42.804 ' 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.804 --rc genhtml_branch_coverage=1 00:08:42.804 --rc genhtml_function_coverage=1 00:08:42.804 --rc genhtml_legend=1 00:08:42.804 --rc geninfo_all_blocks=1 00:08:42.804 --rc geninfo_unexecuted_blocks=1 00:08:42.804 00:08:42.804 ' 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.804 --rc genhtml_branch_coverage=1 00:08:42.804 --rc genhtml_function_coverage=1 00:08:42.804 --rc genhtml_legend=1 00:08:42.804 --rc geninfo_all_blocks=1 00:08:42.804 --rc geninfo_unexecuted_blocks=1 00:08:42.804 00:08:42.804 ' 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.804 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:42.805 
10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.805 10:37:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:48.071 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.071 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:48.072 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.072 10:37:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:48.072 Found net devices under 0000:86:00.0: cvl_0_0 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:48.072 Found net devices under 0000:86:00.1: cvl_0_1 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:48.072 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:48.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:08:48.330 00:08:48.330 --- 10.0.0.2 ping statistics --- 00:08:48.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.330 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:48.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:08:48.330 00:08:48.330 --- 10.0.0.1 ping statistics --- 00:08:48.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.330 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2567785 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2567785 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2567785 ']' 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:48.330 10:37:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.330 [2024-11-07 10:37:15.908775] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:08:48.330 [2024-11-07 10:37:15.908827] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.330 [2024-11-07 10:37:15.976882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.589 [2024-11-07 10:37:16.022180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.589 [2024-11-07 10:37:16.022217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.589 [2024-11-07 10:37:16.022225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.589 [2024-11-07 10:37:16.022231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.589 [2024-11-07 10:37:16.022236] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.589 [2024-11-07 10:37:16.023812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.589 [2024-11-07 10:37:16.023909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.589 [2024-11-07 10:37:16.024015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.589 [2024-11-07 10:37:16.024017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.589 [2024-11-07 10:37:16.161380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.589 Malloc0 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.589 [2024-11-07 10:37:16.232228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:48.589 test case1: single bdev can't be used in multiple subsystems 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.589 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.847 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.848 [2024-11-07 10:37:16.264131] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:48.848 [2024-11-07 10:37:16.264154] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:48.848 [2024-11-07 10:37:16.264161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.848 request: 00:08:48.848 { 00:08:48.848 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:48.848 "namespace": { 00:08:48.848 "bdev_name": "Malloc0", 00:08:48.848 "no_auto_visible": false 
00:08:48.848 }, 00:08:48.848 "method": "nvmf_subsystem_add_ns", 00:08:48.848 "req_id": 1 00:08:48.848 } 00:08:48.848 Got JSON-RPC error response 00:08:48.848 response: 00:08:48.848 { 00:08:48.848 "code": -32602, 00:08:48.848 "message": "Invalid parameters" 00:08:48.848 } 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:48.848 Adding namespace failed - expected result. 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:48.848 test case2: host connect to nvmf target in multiple paths 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:48.848 [2024-11-07 10:37:16.276275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.848 10:37:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:49.781 10:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:51.156 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:51.156 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:08:51.156 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:51.156 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:08:51.156 10:37:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:08:53.067 10:37:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:08:53.067 10:37:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:08:53.067 10:37:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.067 10:37:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:08:53.067 10:37:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.067 10:37:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:08:53.067 10:37:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
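The preceding nmic test drives everything through the harness's rpc_cmd wrapper. As an illustrative recap only (this block is not part of the captured console output), the same sequence can be issued directly with SPDK's scripts/rpc.py against an already-running nvmf_tgt; the NQNs, bdev name, serials, listener address and ports below are taken verbatim from the log above, while running rpc.py standalone from an SPDK checkout is an assumption about reproducing the steps outside the autotest harness.

#!/usr/bin/env bash
# Sketch (assumed setup): reproduce the nmic checks against a running nvmf_tgt.
set -euo pipefail

RPC=./scripts/rpc.py   # assumed: invoked from the root of an SPDK checkout

# One malloc bdev, exposed by the first subsystem, as in the log above.
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Test case1: adding the same bdev to a second subsystem is expected to fail,
# since Malloc0 is already claimed (type exclusive_write) by cnode1.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if ! $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo ' Adding namespace failed - expected result.'
fi

# Test case2: a second listener on port 4421 gives the host two paths to cnode1,
# which the log then exercises with two nvme connect calls.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421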
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:53.067 [global] 00:08:53.067 thread=1 00:08:53.067 invalidate=1 00:08:53.067 rw=write 00:08:53.067 time_based=1 00:08:53.067 runtime=1 00:08:53.067 ioengine=libaio 00:08:53.067 direct=1 00:08:53.067 bs=4096 00:08:53.067 iodepth=1 00:08:53.067 norandommap=0 00:08:53.067 numjobs=1 00:08:53.067 00:08:53.067 verify_dump=1 00:08:53.067 verify_backlog=512 00:08:53.067 verify_state_save=0 00:08:53.067 do_verify=1 00:08:53.067 verify=crc32c-intel 00:08:53.067 [job0] 00:08:53.067 filename=/dev/nvme0n1 00:08:53.067 Could not set queue depth (nvme0n1) 00:08:53.328 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:53.328 fio-3.35 00:08:53.328 Starting 1 thread 00:08:54.702 00:08:54.702 job0: (groupid=0, jobs=1): err= 0: pid=2568856: Thu Nov 7 10:37:21 2024 00:08:54.702 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:08:54.702 slat (nsec): min=6565, max=55299, avg=8073.68, stdev=3805.93 00:08:54.702 clat (usec): min=185, max=41991, avg=1631.48, stdev=7401.71 00:08:54.702 lat (usec): min=193, max=42014, avg=1639.56, stdev=7403.83 00:08:54.703 clat percentiles (usec): 00:08:54.703 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 227], 00:08:54.703 | 30.00th=[ 233], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265], 00:08:54.703 | 70.00th=[ 269], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 404], 00:08:54.703 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:54.703 | 99.99th=[42206] 00:08:54.703 write: IOPS=702, BW=2809KiB/s (2877kB/s)(2812KiB/1001msec); 0 zone resets 00:08:54.703 slat (usec): min=9, max=28326, avg=50.79, stdev=1067.98 00:08:54.703 clat (usec): min=115, max=490, avg=173.94, stdev=32.75 00:08:54.703 lat (usec): min=125, max=28718, avg=224.72, stdev=1076.67 00:08:54.703 clat percentiles (usec): 00:08:54.703 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 139], 00:08:54.703 | 30.00th=[ 182], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 184], 00:08:54.703 | 70.00th=[ 186], 80.00th=[ 186], 90.00th=[ 188], 95.00th=[ 190], 00:08:54.703 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 490], 99.95th=[ 490], 00:08:54.703 | 99.99th=[ 490] 00:08:54.703 bw ( KiB/s): min= 4087, max= 4087, per=100.00%, avg=4087.00, stdev= 0.00, samples=1 00:08:54.703 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:08:54.703 lat (usec) : 250=73.83%, 500=24.69%, 750=0.08% 00:08:54.703 lat (msec) : 50=1.40% 00:08:54.703 cpu : usr=0.50%, sys=1.30%, ctx=1219, majf=0, minf=1 00:08:54.703 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:54.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:54.703 issued rwts: total=512,703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:54.703 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:54.703 00:08:54.703 Run status group 0 (all jobs): 00:08:54.703 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:08:54.703 WRITE: bw=2809KiB/s (2877kB/s), 2809KiB/s-2809KiB/s (2877kB/s-2877kB/s), io=2812KiB (2879kB), run=1001-1001msec 00:08:54.703 00:08:54.703 Disk stats (read/write): 00:08:54.703 nvme0n1: ios=253/512, merge=0/0, ticks=1751/97, in_queue=1848, util=98.70% 00:08:54.703 10:37:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:54.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.703 rmmod nvme_tcp 00:08:54.703 rmmod nvme_fabrics 00:08:54.703 rmmod nvme_keyring 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2567785 ']' 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2567785 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2567785 ']' 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2567785 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2567785 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2567785' 00:08:54.703 killing process with pid 2567785 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2567785 00:08:54.703 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@976 -- # wait 2567785 00:08:54.962 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:54.962 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.962 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.962 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:54.962 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:54.962 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.962 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:54.962 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.962 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:54.962 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.962 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.962 10:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:57.497 00:08:57.497 real 0m14.510s 00:08:57.497 user 0m32.779s 00:08:57.497 sys 0m4.986s 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:57.497 ************************************ 00:08:57.497 END TEST nvmf_nmic 00:08:57.497 ************************************ 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.497 ************************************ 00:08:57.497 START TEST nvmf_fio_target 00:08:57.497 ************************************ 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:57.497 * Looking for test storage... 
00:08:57.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:57.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.497 --rc genhtml_branch_coverage=1 00:08:57.497 --rc genhtml_function_coverage=1 00:08:57.497 --rc genhtml_legend=1 00:08:57.497 --rc geninfo_all_blocks=1 00:08:57.497 --rc geninfo_unexecuted_blocks=1 00:08:57.497 00:08:57.497 ' 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:57.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.497 --rc genhtml_branch_coverage=1 00:08:57.497 --rc genhtml_function_coverage=1 00:08:57.497 --rc genhtml_legend=1 00:08:57.497 --rc geninfo_all_blocks=1 00:08:57.497 --rc geninfo_unexecuted_blocks=1 00:08:57.497 00:08:57.497 ' 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:57.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.497 --rc genhtml_branch_coverage=1 00:08:57.497 --rc genhtml_function_coverage=1 00:08:57.497 --rc genhtml_legend=1 00:08:57.497 --rc geninfo_all_blocks=1 00:08:57.497 --rc geninfo_unexecuted_blocks=1 00:08:57.497 00:08:57.497 ' 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:57.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.497 --rc genhtml_branch_coverage=1 00:08:57.497 --rc genhtml_function_coverage=1 00:08:57.497 --rc genhtml_legend=1 00:08:57.497 --rc geninfo_all_blocks=1 00:08:57.497 --rc geninfo_unexecuted_blocks=1 00:08:57.497 00:08:57.497 ' 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.497 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:57.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:57.498 10:37:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:57.498 10:37:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.763 10:37:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:02.763 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:02.764 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:02.764 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.764 10:37:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:02.764 Found net devices under 0000:86:00.0: cvl_0_0 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:02.764 Found net devices under 0000:86:00.1: cvl_0_1 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:02.764 10:37:30 
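The discovery loop above resolves each supported NIC PCI function to its kernel net device by expanding a sysfs glob, which is what produces the "Found net devices under 0000:86:00.0: cvl_0_0" lines. A standalone sketch of the same lookup, using the PCI address from this run; the interface names are whatever the ice driver created.

pci=0000:86:00.0                       # first E810 port found above
for path in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$path" ] || continue         # no match if the port is not bound to a netdev driver
    echo "Found net devices under $pci: ${path##*/}"
done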
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.764 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.023 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.023 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.023 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:03.023 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.023 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.023 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.023 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:03.023 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:03.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:09:03.023 00:09:03.023 --- 10.0.0.2 ping statistics --- 00:09:03.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.023 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:09:03.024 00:09:03.024 --- 10.0.0.1 ping statistics --- 00:09:03.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.024 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2572622 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2572622 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2572622 ']' 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:03.024 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:03.024 [2024-11-07 10:37:30.659443] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
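nvmf_tcp_init above splits the two E810 ports into an initiator side and a target side: the target port is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, the firewall is opened for NVMe/TCP port 4420 on the initiator-facing interface, and a ping in each direction verifies the path before nvmf_tgt is started inside the namespace. A condensed sketch of that sequence, using the interface and namespace names from this run.

TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                 # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"          # initiator side stays in the default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator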
00:09:03.024 [2024-11-07 10:37:30.659488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.282 [2024-11-07 10:37:30.725913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.282 [2024-11-07 10:37:30.768848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.282 [2024-11-07 10:37:30.768886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.282 [2024-11-07 10:37:30.768893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.282 [2024-11-07 10:37:30.768899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.282 [2024-11-07 10:37:30.768904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.282 [2024-11-07 10:37:30.770474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.282 [2024-11-07 10:37:30.770516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.282 [2024-11-07 10:37:30.770579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.282 [2024-11-07 10:37:30.770581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.282 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:03.282 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:03.282 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.282 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:03.282 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:03.283 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.283 10:37:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:03.540 [2024-11-07 10:37:31.076582] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.540 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.797 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:03.798 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:04.055 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:04.055 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:04.313 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:04.313 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:04.570 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:04.570 10:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:04.570 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:04.828 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:04.828 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.085 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:05.085 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.343 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:05.343 10:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:05.600 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:05.600 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:05.600 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:05.858 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:05.858 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:06.115 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.376 [2024-11-07 10:37:33.793156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.376 10:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:06.376 10:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:06.676 10:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.687 10:37:35 
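fio.sh then provisions the target over the RPC socket and attaches the kernel initiator: a TCP transport, seven 64 MB malloc bdevs (two exported directly, two combined into a RAID-0, three into a concat), one subsystem with a listener on 10.0.0.2:4420, then nvme connect from the default namespace and a wait until all four namespaces show up in lsblk. A condensed sketch of the same flow, not the script verbatim; the rpc.py path is shortened, the loops are illustrative, and the host NQN/ID are the values nvme gen-hostnqn produced earlier in the trace.

rpc=./scripts/rpc.py                   # the run uses the full workspace path
SUBNQN=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 7); do $rpc bdev_malloc_create 64 512; done      # prints Malloc0 .. Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem "$SUBNQN" -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns "$SUBNQN" "$bdev"
done
$rpc nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420

nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid=80aaeb9f-0274-ea11-906e-0017a4403562
# the subsystem serial is SPDKISFASTANDAWESOME, so count the block devices carrying it
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 4 ]; do sleep 2; done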
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:07.687 10:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:07.687 10:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.687 10:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:07.687 10:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:07.687 10:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:10.211 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:10.211 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:10.211 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:10.211 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:10.211 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:10.211 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:10.211 10:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:10.211 [global] 00:09:10.211 thread=1 00:09:10.211 invalidate=1 00:09:10.211 rw=write 00:09:10.211 time_based=1 00:09:10.211 runtime=1 00:09:10.211 ioengine=libaio 00:09:10.211 direct=1 00:09:10.211 bs=4096 00:09:10.211 iodepth=1 00:09:10.211 norandommap=0 00:09:10.211 numjobs=1 00:09:10.211 00:09:10.211 verify_dump=1 00:09:10.211 verify_backlog=512 00:09:10.211 verify_state_save=0 00:09:10.211 do_verify=1 00:09:10.211 verify=crc32c-intel 00:09:10.211 [job0] 00:09:10.211 filename=/dev/nvme0n1 00:09:10.211 [job1] 00:09:10.211 filename=/dev/nvme0n2 00:09:10.211 [job2] 00:09:10.211 filename=/dev/nvme0n3 00:09:10.211 [job3] 00:09:10.211 filename=/dev/nvme0n4 00:09:10.211 Could not set queue depth (nvme0n1) 00:09:10.211 Could not set queue depth (nvme0n2) 00:09:10.211 Could not set queue depth (nvme0n3) 00:09:10.211 Could not set queue depth (nvme0n4) 00:09:10.211 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.211 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.211 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.211 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.211 fio-3.35 00:09:10.211 Starting 4 threads 00:09:11.605 00:09:11.605 job0: (groupid=0, jobs=1): err= 0: pid=2573977: Thu Nov 7 10:37:38 2024 00:09:11.605 read: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec) 00:09:11.605 slat (nsec): min=2295, max=52141, avg=4890.46, stdev=3540.45 00:09:11.605 clat (usec): min=149, max=535, avg=215.35, stdev=31.75 00:09:11.605 lat (usec): min=151, max=560, avg=220.24, stdev=33.19 00:09:11.605 clat percentiles (usec): 00:09:11.605 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 
00:09:11.605 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 219], 00:09:11.605 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 265], 00:09:11.605 | 99.00th=[ 285], 99.50th=[ 334], 99.90th=[ 478], 99.95th=[ 486], 00:09:11.605 | 99.99th=[ 537] 00:09:11.605 write: IOPS=2755, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1000msec); 0 zone resets 00:09:11.605 slat (nsec): min=3369, max=42873, avg=7372.95, stdev=5250.46 00:09:11.605 clat (usec): min=95, max=358, avg=147.16, stdev=23.04 00:09:11.605 lat (usec): min=101, max=371, avg=154.53, stdev=26.25 00:09:11.605 clat percentiles (usec): 00:09:11.605 | 1.00th=[ 106], 5.00th=[ 115], 10.00th=[ 120], 20.00th=[ 129], 00:09:11.605 | 30.00th=[ 135], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:09:11.605 | 70.00th=[ 155], 80.00th=[ 165], 90.00th=[ 180], 95.00th=[ 186], 00:09:11.605 | 99.00th=[ 208], 99.50th=[ 241], 99.90th=[ 285], 99.95th=[ 318], 00:09:11.605 | 99.99th=[ 359] 00:09:11.605 bw ( KiB/s): min=12288, max=12288, per=48.12%, avg=12288.00, stdev= 0.00, samples=1 00:09:11.605 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:11.605 lat (usec) : 100=0.06%, 250=93.47%, 500=6.45%, 750=0.02% 00:09:11.605 cpu : usr=4.20%, sys=3.20%, ctx=5317, majf=0, minf=2 00:09:11.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.605 issued rwts: total=2560,2755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.605 job1: (groupid=0, jobs=1): err= 0: pid=2573978: Thu Nov 7 10:37:38 2024 00:09:11.605 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:11.605 slat (nsec): min=7414, max=38836, avg=8945.49, stdev=2216.80 00:09:11.605 clat (usec): min=175, max=41167, avg=764.82, stdev=4564.52 00:09:11.605 lat (usec): min=184, max=41176, avg=773.77, stdev=4564.83 00:09:11.605 clat percentiles (usec): 00:09:11.605 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:09:11.605 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 235], 60.00th=[ 249], 00:09:11.605 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 302], 95.00th=[ 437], 00:09:11.605 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:11.605 | 99.99th=[41157] 00:09:11.605 write: IOPS=1125, BW=4503KiB/s (4612kB/s)(4508KiB/1001msec); 0 zone resets 00:09:11.605 slat (nsec): min=10657, max=44509, avg=12619.13, stdev=2649.01 00:09:11.605 clat (usec): min=110, max=360, avg=165.76, stdev=33.40 00:09:11.605 lat (usec): min=131, max=390, avg=178.37, stdev=33.90 00:09:11.605 clat percentiles (usec): 00:09:11.605 | 1.00th=[ 125], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:09:11.605 | 30.00th=[ 141], 40.00th=[ 153], 50.00th=[ 165], 60.00th=[ 174], 00:09:11.605 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 200], 95.00th=[ 221], 00:09:11.605 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 322], 99.95th=[ 363], 00:09:11.605 | 99.99th=[ 363] 00:09:11.605 bw ( KiB/s): min= 4096, max= 4096, per=16.04%, avg=4096.00, stdev= 0.00, samples=1 00:09:11.605 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:11.605 lat (usec) : 250=79.50%, 500=19.57%, 750=0.33% 00:09:11.605 lat (msec) : 50=0.60% 00:09:11.605 cpu : usr=1.30%, sys=4.10%, ctx=2152, majf=0, minf=1 00:09:11.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.606 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.606 issued rwts: total=1024,1127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.606 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.606 job2: (groupid=0, jobs=1): err= 0: pid=2573979: Thu Nov 7 10:37:38 2024 00:09:11.606 read: IOPS=522, BW=2089KiB/s (2139kB/s)(2108KiB/1009msec) 00:09:11.606 slat (nsec): min=6664, max=23969, avg=8168.29, stdev=2543.81 00:09:11.606 clat (usec): min=220, max=41998, avg=1500.90, stdev=6794.80 00:09:11.606 lat (usec): min=228, max=42020, avg=1509.06, stdev=6797.06 00:09:11.606 clat percentiles (usec): 00:09:11.606 | 1.00th=[ 225], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 253], 00:09:11.606 | 30.00th=[ 265], 40.00th=[ 281], 50.00th=[ 388], 60.00th=[ 396], 00:09:11.606 | 70.00th=[ 404], 80.00th=[ 412], 90.00th=[ 433], 95.00th=[ 453], 00:09:11.606 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:11.606 | 99.99th=[42206] 00:09:11.606 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:09:11.606 slat (nsec): min=9302, max=49735, avg=10888.93, stdev=2324.09 00:09:11.606 clat (usec): min=135, max=393, avg=194.23, stdev=37.40 00:09:11.606 lat (usec): min=145, max=443, avg=205.12, stdev=37.64 00:09:11.606 clat percentiles (usec): 00:09:11.606 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:09:11.606 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 190], 00:09:11.606 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 243], 95.00th=[ 245], 00:09:11.606 | 99.00th=[ 251], 99.50th=[ 297], 99.90th=[ 330], 99.95th=[ 396], 00:09:11.606 | 99.99th=[ 396] 00:09:11.606 bw ( KiB/s): min= 8192, max= 8192, per=32.08%, avg=8192.00, stdev= 0.00, samples=1 00:09:11.606 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:11.606 lat (usec) : 250=71.12%, 500=27.79%, 750=0.13% 00:09:11.606 lat (msec) : 50=0.97% 00:09:11.606 cpu : usr=0.30%, sys=1.98%, ctx=1551, majf=0, minf=2 00:09:11.606 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.606 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.606 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.606 job3: (groupid=0, jobs=1): err= 0: pid=2573980: Thu Nov 7 10:37:38 2024 00:09:11.606 read: IOPS=1348, BW=5393KiB/s (5522kB/s)(5420KiB/1005msec) 00:09:11.606 slat (nsec): min=6577, max=25302, avg=7735.11, stdev=1622.53 00:09:11.606 clat (usec): min=187, max=42122, avg=524.05, stdev=3336.42 00:09:11.606 lat (usec): min=195, max=42129, avg=531.79, stdev=3336.85 00:09:11.606 clat percentiles (usec): 00:09:11.606 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:09:11.606 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 241], 00:09:11.606 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 445], 00:09:11.606 | 99.00th=[ 529], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:11.606 | 99.99th=[42206] 00:09:11.606 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:09:11.606 slat (nsec): min=9560, max=39043, avg=10978.78, stdev=1453.57 00:09:11.606 clat (usec): min=129, max=384, avg=169.71, stdev=29.33 00:09:11.606 lat (usec): min=139, max=422, avg=180.69, stdev=29.64 00:09:11.606 clat 
percentiles (usec): 00:09:11.606 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:09:11.606 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:09:11.606 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 237], 95.00th=[ 243], 00:09:11.606 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 314], 99.95th=[ 383], 00:09:11.606 | 99.99th=[ 383] 00:09:11.606 bw ( KiB/s): min= 4096, max= 8192, per=24.06%, avg=6144.00, stdev=2896.31, samples=2 00:09:11.606 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:11.606 lat (usec) : 250=86.41%, 500=12.80%, 750=0.48% 00:09:11.606 lat (msec) : 50=0.31% 00:09:11.606 cpu : usr=1.69%, sys=2.49%, ctx=2892, majf=0, minf=1 00:09:11.606 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.606 issued rwts: total=1355,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.606 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.606 00:09:11.606 Run status group 0 (all jobs): 00:09:11.606 READ: bw=21.2MiB/s (22.2MB/s), 2089KiB/s-10.0MiB/s (2139kB/s-10.5MB/s), io=21.4MiB (22.4MB), run=1000-1009msec 00:09:11.606 WRITE: bw=24.9MiB/s (26.2MB/s), 4059KiB/s-10.8MiB/s (4157kB/s-11.3MB/s), io=25.2MiB (26.4MB), run=1000-1009msec 00:09:11.606 00:09:11.606 Disk stats (read/write): 00:09:11.606 nvme0n1: ios=2098/2308, merge=0/0, ticks=444/316, in_queue=760, util=81.96% 00:09:11.606 nvme0n2: ios=857/1024, merge=0/0, ticks=980/154, in_queue=1134, util=98.04% 00:09:11.606 nvme0n3: ios=522/1024, merge=0/0, ticks=585/201, in_queue=786, util=87.50% 00:09:11.606 nvme0n4: ios=1081/1301, merge=0/0, ticks=1405/208, in_queue=1613, util=97.90% 00:09:11.606 10:37:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:11.606 [global] 00:09:11.606 thread=1 00:09:11.606 invalidate=1 00:09:11.606 rw=randwrite 00:09:11.606 time_based=1 00:09:11.606 runtime=1 00:09:11.606 ioengine=libaio 00:09:11.606 direct=1 00:09:11.606 bs=4096 00:09:11.606 iodepth=1 00:09:11.606 norandommap=0 00:09:11.606 numjobs=1 00:09:11.606 00:09:11.606 verify_dump=1 00:09:11.606 verify_backlog=512 00:09:11.606 verify_state_save=0 00:09:11.606 do_verify=1 00:09:11.606 verify=crc32c-intel 00:09:11.606 [job0] 00:09:11.606 filename=/dev/nvme0n1 00:09:11.606 [job1] 00:09:11.606 filename=/dev/nvme0n2 00:09:11.606 [job2] 00:09:11.606 filename=/dev/nvme0n3 00:09:11.606 [job3] 00:09:11.606 filename=/dev/nvme0n4 00:09:11.606 Could not set queue depth (nvme0n1) 00:09:11.606 Could not set queue depth (nvme0n2) 00:09:11.606 Could not set queue depth (nvme0n3) 00:09:11.606 Could not set queue depth (nvme0n4) 00:09:11.865 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.865 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.865 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.865 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.865 fio-3.35 00:09:11.865 Starting 4 threads 00:09:13.236 00:09:13.236 job0: (groupid=0, jobs=1): err= 0: pid=2574348: Thu Nov 7 10:37:40 2024 00:09:13.236 read: 
IOPS=28, BW=114KiB/s (117kB/s)(116KiB/1017msec) 00:09:13.236 slat (nsec): min=8750, max=38338, avg=20420.72, stdev=6905.64 00:09:13.236 clat (usec): min=324, max=42032, avg=31251.09, stdev=17711.76 00:09:13.236 lat (usec): min=347, max=42055, avg=31271.51, stdev=17711.84 00:09:13.236 clat percentiles (usec): 00:09:13.236 | 1.00th=[ 326], 5.00th=[ 367], 10.00th=[ 367], 20.00th=[ 469], 00:09:13.236 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:13.236 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:09:13.236 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:13.236 | 99.99th=[42206] 00:09:13.236 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:09:13.236 slat (nsec): min=9461, max=61265, avg=10559.72, stdev=2520.15 00:09:13.236 clat (usec): min=128, max=396, avg=201.61, stdev=31.04 00:09:13.236 lat (usec): min=138, max=440, avg=212.17, stdev=31.65 00:09:13.236 clat percentiles (usec): 00:09:13.236 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 178], 00:09:13.236 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:09:13.236 | 70.00th=[ 210], 80.00th=[ 225], 90.00th=[ 243], 95.00th=[ 247], 00:09:13.236 | 99.00th=[ 285], 99.50th=[ 371], 99.90th=[ 396], 99.95th=[ 396], 00:09:13.236 | 99.99th=[ 396] 00:09:13.236 bw ( KiB/s): min= 4096, max= 4096, per=25.93%, avg=4096.00, stdev= 0.00, samples=1 00:09:13.236 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:13.236 lat (usec) : 250=90.57%, 500=5.36% 00:09:13.236 lat (msec) : 50=4.07% 00:09:13.236 cpu : usr=0.10%, sys=0.69%, ctx=542, majf=0, minf=1 00:09:13.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.236 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.236 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.236 job1: (groupid=0, jobs=1): err= 0: pid=2574349: Thu Nov 7 10:37:40 2024 00:09:13.236 read: IOPS=22, BW=88.7KiB/s (90.8kB/s)(92.0KiB/1037msec) 00:09:13.236 slat (nsec): min=9761, max=23636, avg=22278.39, stdev=2858.50 00:09:13.236 clat (usec): min=40819, max=42013, avg=41137.41, stdev=391.40 00:09:13.236 lat (usec): min=40829, max=42032, avg=41159.69, stdev=391.32 00:09:13.236 clat percentiles (usec): 00:09:13.236 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:13.236 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:13.236 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:13.236 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:13.236 | 99.99th=[42206] 00:09:13.236 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:09:13.236 slat (nsec): min=8975, max=34071, avg=10964.22, stdev=1617.04 00:09:13.236 clat (usec): min=138, max=379, avg=162.25, stdev=16.08 00:09:13.236 lat (usec): min=148, max=413, avg=173.22, stdev=16.74 00:09:13.236 clat percentiles (usec): 00:09:13.236 | 1.00th=[ 143], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:09:13.236 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:09:13.236 | 70.00th=[ 167], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 184], 00:09:13.236 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 379], 99.95th=[ 379], 00:09:13.236 | 99.99th=[ 379] 00:09:13.236 bw ( KiB/s): 
min= 4096, max= 4096, per=25.93%, avg=4096.00, stdev= 0.00, samples=1 00:09:13.236 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:13.236 lat (usec) : 250=95.33%, 500=0.37% 00:09:13.236 lat (msec) : 50=4.30% 00:09:13.236 cpu : usr=0.29%, sys=0.58%, ctx=536, majf=0, minf=1 00:09:13.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.236 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.236 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.236 job2: (groupid=0, jobs=1): err= 0: pid=2574350: Thu Nov 7 10:37:40 2024 00:09:13.236 read: IOPS=2278, BW=9115KiB/s (9334kB/s)(9124KiB/1001msec) 00:09:13.236 slat (nsec): min=7378, max=41313, avg=8539.00, stdev=1533.41 00:09:13.236 clat (usec): min=188, max=447, avg=236.04, stdev=18.55 00:09:13.236 lat (usec): min=196, max=456, avg=244.58, stdev=18.59 00:09:13.236 clat percentiles (usec): 00:09:13.236 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:09:13.236 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:09:13.236 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 265], 00:09:13.236 | 99.00th=[ 273], 99.50th=[ 277], 99.90th=[ 416], 99.95th=[ 429], 00:09:13.236 | 99.99th=[ 449] 00:09:13.236 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:13.236 slat (nsec): min=10421, max=48237, avg=11688.89, stdev=1963.85 00:09:13.236 clat (usec): min=122, max=1333, avg=155.51, stdev=30.68 00:09:13.236 lat (usec): min=135, max=1344, avg=167.20, stdev=30.76 00:09:13.236 clat percentiles (usec): 00:09:13.236 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:09:13.236 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:09:13.236 | 70.00th=[ 159], 80.00th=[ 167], 90.00th=[ 182], 95.00th=[ 192], 00:09:13.236 | 99.00th=[ 239], 99.50th=[ 243], 99.90th=[ 265], 99.95th=[ 371], 00:09:13.236 | 99.99th=[ 1336] 00:09:13.236 bw ( KiB/s): min=11912, max=11912, per=75.40%, avg=11912.00, stdev= 0.00, samples=1 00:09:13.236 iops : min= 2978, max= 2978, avg=2978.00, stdev= 0.00, samples=1 00:09:13.236 lat (usec) : 250=89.71%, 500=10.27% 00:09:13.236 lat (msec) : 2=0.02% 00:09:13.236 cpu : usr=4.40%, sys=7.30%, ctx=4842, majf=0, minf=1 00:09:13.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.236 issued rwts: total=2281,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.236 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.236 job3: (groupid=0, jobs=1): err= 0: pid=2574352: Thu Nov 7 10:37:40 2024 00:09:13.236 read: IOPS=162, BW=649KiB/s (665kB/s)(668KiB/1029msec) 00:09:13.236 slat (nsec): min=7787, max=29692, avg=10629.57, stdev=5166.86 00:09:13.236 clat (usec): min=250, max=41973, avg=5485.15, stdev=13523.70 00:09:13.236 lat (usec): min=259, max=42002, avg=5495.78, stdev=13527.01 00:09:13.236 clat percentiles (usec): 00:09:13.236 | 1.00th=[ 269], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 359], 00:09:13.236 | 30.00th=[ 363], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:09:13.236 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[41157], 95.00th=[41157], 00:09:13.236 | 99.00th=[41157], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:13.236 | 99.99th=[42206] 00:09:13.236 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:09:13.236 slat (nsec): min=9401, max=33545, avg=12116.45, stdev=2136.48 00:09:13.236 clat (usec): min=147, max=606, avg=200.12, stdev=36.34 00:09:13.236 lat (usec): min=158, max=617, avg=212.23, stdev=36.37 00:09:13.236 clat percentiles (usec): 00:09:13.236 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:09:13.236 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:09:13.236 | 70.00th=[ 204], 80.00th=[ 221], 90.00th=[ 243], 95.00th=[ 265], 00:09:13.236 | 99.00th=[ 326], 99.50th=[ 367], 99.90th=[ 611], 99.95th=[ 611], 00:09:13.236 | 99.99th=[ 611] 00:09:13.236 bw ( KiB/s): min= 4096, max= 4096, per=25.93%, avg=4096.00, stdev= 0.00, samples=1 00:09:13.236 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:13.236 lat (usec) : 250=70.40%, 500=26.36%, 750=0.15% 00:09:13.236 lat (msec) : 50=3.09% 00:09:13.236 cpu : usr=0.68%, sys=0.97%, ctx=680, majf=0, minf=1 00:09:13.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.236 issued rwts: total=167,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.236 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.236 00:09:13.236 Run status group 0 (all jobs): 00:09:13.236 READ: bw=9643KiB/s (9875kB/s), 88.7KiB/s-9115KiB/s (90.8kB/s-9334kB/s), io=9.77MiB (10.2MB), run=1001-1037msec 00:09:13.236 WRITE: bw=15.4MiB/s (16.2MB/s), 1975KiB/s-9.99MiB/s (2022kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1037msec 00:09:13.236 00:09:13.236 Disk stats (read/write): 00:09:13.236 nvme0n1: ios=75/512, merge=0/0, ticks=1587/98, in_queue=1685, util=94.18% 00:09:13.236 nvme0n2: ios=51/512, merge=0/0, ticks=1688/82, in_queue=1770, util=97.97% 00:09:13.236 nvme0n3: ios=2075/2048, merge=0/0, ticks=1007/305, in_queue=1312, util=96.67% 00:09:13.236 nvme0n4: ios=187/512, merge=0/0, ticks=1657/96, in_queue=1753, util=98.43% 00:09:13.236 10:37:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:13.236 [global] 00:09:13.236 thread=1 00:09:13.236 invalidate=1 00:09:13.236 rw=write 00:09:13.236 time_based=1 00:09:13.236 runtime=1 00:09:13.236 ioengine=libaio 00:09:13.236 direct=1 00:09:13.236 bs=4096 00:09:13.236 iodepth=128 00:09:13.236 norandommap=0 00:09:13.236 numjobs=1 00:09:13.236 00:09:13.236 verify_dump=1 00:09:13.236 verify_backlog=512 00:09:13.236 verify_state_save=0 00:09:13.237 do_verify=1 00:09:13.237 verify=crc32c-intel 00:09:13.237 [job0] 00:09:13.237 filename=/dev/nvme0n1 00:09:13.237 [job1] 00:09:13.237 filename=/dev/nvme0n2 00:09:13.237 [job2] 00:09:13.237 filename=/dev/nvme0n3 00:09:13.237 [job3] 00:09:13.237 filename=/dev/nvme0n4 00:09:13.237 Could not set queue depth (nvme0n1) 00:09:13.237 Could not set queue depth (nvme0n2) 00:09:13.237 Could not set queue depth (nvme0n3) 00:09:13.237 Could not set queue depth (nvme0n4) 00:09:13.237 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.237 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.237 job2: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.237 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:13.237 fio-3.35 00:09:13.237 Starting 4 threads 00:09:14.610 00:09:14.610 job0: (groupid=0, jobs=1): err= 0: pid=2574735: Thu Nov 7 10:37:42 2024 00:09:14.610 read: IOPS=3733, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1005msec) 00:09:14.610 slat (nsec): min=1077, max=20087k, avg=138269.71, stdev=866145.55 00:09:14.610 clat (usec): min=451, max=43319, avg=17675.12, stdev=6212.10 00:09:14.610 lat (usec): min=4395, max=43342, avg=17813.39, stdev=6273.96 00:09:14.610 clat percentiles (usec): 00:09:14.610 | 1.00th=[ 6128], 5.00th=[ 9241], 10.00th=[11207], 20.00th=[12911], 00:09:14.610 | 30.00th=[14091], 40.00th=[14615], 50.00th=[16057], 60.00th=[18482], 00:09:14.610 | 70.00th=[20579], 80.00th=[21890], 90.00th=[26608], 95.00th=[28705], 00:09:14.610 | 99.00th=[35390], 99.50th=[39060], 99.90th=[42730], 99.95th=[42730], 00:09:14.610 | 99.99th=[43254] 00:09:14.610 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:14.610 slat (nsec): min=1882, max=9117.3k, avg=111008.76, stdev=614594.82 00:09:14.610 clat (usec): min=936, max=66935, avg=14923.01, stdev=8533.71 00:09:14.610 lat (usec): min=972, max=66941, avg=15034.02, stdev=8565.86 00:09:14.610 clat percentiles (usec): 00:09:14.610 | 1.00th=[ 3851], 5.00th=[ 6718], 10.00th=[ 9896], 20.00th=[10421], 00:09:14.610 | 30.00th=[10814], 40.00th=[11600], 50.00th=[11994], 60.00th=[13435], 00:09:14.610 | 70.00th=[15795], 80.00th=[19792], 90.00th=[22152], 95.00th=[26084], 00:09:14.610 | 99.00th=[56886], 99.50th=[63701], 99.90th=[66847], 99.95th=[66847], 00:09:14.610 | 99.99th=[66847] 00:09:14.610 bw ( KiB/s): min=15704, max=17098, per=22.89%, avg=16401.00, stdev=985.71, samples=2 00:09:14.610 iops : min= 3926, max= 4274, avg=4100.00, stdev=246.07, samples=2 00:09:14.610 lat (usec) : 500=0.01%, 1000=0.01% 00:09:14.610 lat (msec) : 2=0.01%, 4=0.85%, 10=8.72%, 20=65.32%, 50=23.96% 00:09:14.610 lat (msec) : 100=1.12% 00:09:14.610 cpu : usr=3.29%, sys=3.69%, ctx=317, majf=0, minf=2 00:09:14.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:14.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.610 issued rwts: total=3752,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.610 job1: (groupid=0, jobs=1): err= 0: pid=2574745: Thu Nov 7 10:37:42 2024 00:09:14.610 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:09:14.610 slat (nsec): min=1048, max=9864.0k, avg=76326.42, stdev=530242.03 00:09:14.610 clat (usec): min=1741, max=27092, avg=10529.23, stdev=2977.82 00:09:14.610 lat (usec): min=1749, max=27098, avg=10605.56, stdev=3004.39 00:09:14.610 clat percentiles (usec): 00:09:14.610 | 1.00th=[ 3261], 5.00th=[ 4948], 10.00th=[ 7177], 20.00th=[ 8979], 00:09:14.610 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[10683], 00:09:14.610 | 70.00th=[11469], 80.00th=[11994], 90.00th=[13304], 95.00th=[15401], 00:09:14.610 | 99.00th=[20579], 99.50th=[23987], 99.90th=[24773], 99.95th=[27132], 00:09:14.610 | 99.99th=[27132] 00:09:14.610 write: IOPS=5825, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1005msec); 0 zone resets 00:09:14.610 slat (nsec): min=1874, max=9616.6k, avg=87326.31, stdev=523827.40 00:09:14.610 clat (usec): 
min=850, max=51986, avg=11586.30, stdev=6559.15 00:09:14.610 lat (usec): min=2897, max=51992, avg=11673.63, stdev=6597.46 00:09:14.610 clat percentiles (usec): 00:09:14.610 | 1.00th=[ 3294], 5.00th=[ 5407], 10.00th=[ 6587], 20.00th=[ 8094], 00:09:14.610 | 30.00th=[ 8848], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10421], 00:09:14.610 | 70.00th=[11207], 80.00th=[12518], 90.00th=[19006], 95.00th=[23462], 00:09:14.610 | 99.00th=[40109], 99.50th=[46400], 99.90th=[52167], 99.95th=[52167], 00:09:14.610 | 99.99th=[52167] 00:09:14.610 bw ( KiB/s): min=20912, max=24912, per=31.98%, avg=22912.00, stdev=2828.43, samples=2 00:09:14.610 iops : min= 5228, max= 6228, avg=5728.00, stdev=707.11, samples=2 00:09:14.610 lat (usec) : 1000=0.01% 00:09:14.610 lat (msec) : 2=0.14%, 4=2.74%, 10=42.01%, 20=50.02%, 50=4.94% 00:09:14.610 lat (msec) : 100=0.14% 00:09:14.610 cpu : usr=4.48%, sys=5.28%, ctx=495, majf=0, minf=2 00:09:14.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:14.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.610 issued rwts: total=5632,5855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.610 job2: (groupid=0, jobs=1): err= 0: pid=2574777: Thu Nov 7 10:37:42 2024 00:09:14.610 read: IOPS=3267, BW=12.8MiB/s (13.4MB/s)(12.9MiB/1013msec) 00:09:14.610 slat (nsec): min=1697, max=8869.8k, avg=145003.98, stdev=744724.76 00:09:14.610 clat (usec): min=8335, max=35971, avg=18567.52, stdev=5274.99 00:09:14.610 lat (usec): min=8341, max=35978, avg=18712.53, stdev=5333.16 00:09:14.610 clat percentiles (usec): 00:09:14.610 | 1.00th=[ 9765], 5.00th=[11338], 10.00th=[11863], 20.00th=[12911], 00:09:14.610 | 30.00th=[14353], 40.00th=[16188], 50.00th=[19530], 60.00th=[20841], 00:09:14.610 | 70.00th=[21890], 80.00th=[22938], 90.00th=[24773], 95.00th=[26608], 00:09:14.610 | 99.00th=[30802], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:09:14.610 | 99.99th=[35914] 00:09:14.610 write: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec); 0 zone resets 00:09:14.610 slat (usec): min=2, max=9415, avg=138.76, stdev=702.70 00:09:14.610 clat (usec): min=8267, max=47381, avg=18565.62, stdev=6587.64 00:09:14.610 lat (usec): min=9433, max=47393, avg=18704.38, stdev=6644.85 00:09:14.610 clat percentiles (usec): 00:09:14.610 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11994], 20.00th=[13698], 00:09:14.610 | 30.00th=[14615], 40.00th=[16319], 50.00th=[17433], 60.00th=[19006], 00:09:14.610 | 70.00th=[20841], 80.00th=[21627], 90.00th=[23725], 95.00th=[34341], 00:09:14.610 | 99.00th=[44303], 99.50th=[44303], 99.90th=[47449], 99.95th=[47449], 00:09:14.610 | 99.99th=[47449] 00:09:14.610 bw ( KiB/s): min=12296, max=16408, per=20.03%, avg=14352.00, stdev=2907.62, samples=2 00:09:14.610 iops : min= 3074, max= 4102, avg=3588.00, stdev=726.91, samples=2 00:09:14.611 lat (msec) : 10=1.17%, 20=59.54%, 50=39.28% 00:09:14.611 cpu : usr=4.25%, sys=3.66%, ctx=359, majf=0, minf=1 00:09:14.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:14.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.611 issued rwts: total=3310,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.611 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.611 job3: (groupid=0, jobs=1): 
err= 0: pid=2574789: Thu Nov 7 10:37:42 2024 00:09:14.611 read: IOPS=4307, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1008msec) 00:09:14.611 slat (nsec): min=1147, max=19469k, avg=115430.00, stdev=789257.85 00:09:14.611 clat (usec): min=2440, max=54176, avg=15209.94, stdev=7546.11 00:09:14.611 lat (usec): min=4523, max=54184, avg=15325.37, stdev=7594.44 00:09:14.611 clat percentiles (usec): 00:09:14.611 | 1.00th=[ 8094], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11207], 00:09:14.611 | 30.00th=[11469], 40.00th=[11600], 50.00th=[12387], 60.00th=[13173], 00:09:14.611 | 70.00th=[14222], 80.00th=[18220], 90.00th=[23200], 95.00th=[33817], 00:09:14.611 | 99.00th=[50070], 99.50th=[52167], 99.90th=[52167], 99.95th=[54264], 00:09:14.611 | 99.99th=[54264] 00:09:14.611 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:09:14.611 slat (nsec): min=1901, max=17919k, avg=103709.07, stdev=721305.38 00:09:14.611 clat (usec): min=1152, max=57033, avg=13242.28, stdev=6940.36 00:09:14.611 lat (usec): min=1162, max=57064, avg=13345.99, stdev=7002.98 00:09:14.611 clat percentiles (usec): 00:09:14.611 | 1.00th=[ 5604], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[10159], 00:09:14.611 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:09:14.611 | 70.00th=[11863], 80.00th=[13829], 90.00th=[17695], 95.00th=[30802], 00:09:14.611 | 99.00th=[45876], 99.50th=[46400], 99.90th=[46400], 99.95th=[51119], 00:09:14.611 | 99.99th=[56886] 00:09:14.611 bw ( KiB/s): min=16248, max=20616, per=25.73%, avg=18432.00, stdev=3088.64, samples=2 00:09:14.611 iops : min= 4062, max= 5154, avg=4608.00, stdev=772.16, samples=2 00:09:14.611 lat (msec) : 2=0.20%, 4=0.08%, 10=13.43%, 20=74.41%, 50=11.50% 00:09:14.611 lat (msec) : 100=0.38% 00:09:14.611 cpu : usr=3.57%, sys=4.67%, ctx=379, majf=0, minf=1 00:09:14.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:14.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.611 issued rwts: total=4342,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.611 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.611 00:09:14.611 Run status group 0 (all jobs): 00:09:14.611 READ: bw=65.7MiB/s (68.9MB/s), 12.8MiB/s-21.9MiB/s (13.4MB/s-23.0MB/s), io=66.5MiB (69.8MB), run=1005-1013msec 00:09:14.611 WRITE: bw=70.0MiB/s (73.4MB/s), 13.8MiB/s-22.8MiB/s (14.5MB/s-23.9MB/s), io=70.9MiB (74.3MB), run=1005-1013msec 00:09:14.611 00:09:14.611 Disk stats (read/write): 00:09:14.611 nvme0n1: ios=3093/3527, merge=0/0, ticks=21078/20135, in_queue=41213, util=96.59% 00:09:14.611 nvme0n2: ios=4397/4608, merge=0/0, ticks=37090/34786, in_queue=71876, util=85.39% 00:09:14.611 nvme0n3: ios=2680/3072, merge=0/0, ticks=16178/17212, in_queue=33390, util=99.78% 00:09:14.611 nvme0n4: ios=3129/3584, merge=0/0, ticks=18418/18834, in_queue=37252, util=97.24% 00:09:14.611 10:37:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:14.611 [global] 00:09:14.611 thread=1 00:09:14.611 invalidate=1 00:09:14.611 rw=randwrite 00:09:14.611 time_based=1 00:09:14.611 runtime=1 00:09:14.611 ioengine=libaio 00:09:14.611 direct=1 00:09:14.611 bs=4096 00:09:14.611 iodepth=128 00:09:14.611 norandommap=0 00:09:14.611 numjobs=1 00:09:14.611 00:09:14.611 verify_dump=1 00:09:14.611 verify_backlog=512 00:09:14.611 
verify_state_save=0 00:09:14.611 do_verify=1 00:09:14.611 verify=crc32c-intel 00:09:14.611 [job0] 00:09:14.611 filename=/dev/nvme0n1 00:09:14.611 [job1] 00:09:14.611 filename=/dev/nvme0n2 00:09:14.611 [job2] 00:09:14.611 filename=/dev/nvme0n3 00:09:14.611 [job3] 00:09:14.611 filename=/dev/nvme0n4 00:09:14.611 Could not set queue depth (nvme0n1) 00:09:14.611 Could not set queue depth (nvme0n2) 00:09:14.611 Could not set queue depth (nvme0n3) 00:09:14.611 Could not set queue depth (nvme0n4) 00:09:14.869 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:14.869 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:14.869 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:14.869 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:14.869 fio-3.35 00:09:14.869 Starting 4 threads 00:09:16.243 00:09:16.243 job0: (groupid=0, jobs=1): err= 0: pid=2575217: Thu Nov 7 10:37:43 2024 00:09:16.243 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:09:16.243 slat (nsec): min=1332, max=26848k, avg=110117.76, stdev=878268.30 00:09:16.243 clat (usec): min=8198, max=64424, avg=14649.19, stdev=8597.76 00:09:16.244 lat (usec): min=8207, max=64450, avg=14759.31, stdev=8674.08 00:09:16.244 clat percentiles (usec): 00:09:16.244 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10290], 00:09:16.244 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:09:16.244 | 70.00th=[12125], 80.00th=[14222], 90.00th=[29492], 95.00th=[35390], 00:09:16.244 | 99.00th=[51643], 99.50th=[51643], 99.90th=[51643], 99.95th=[53740], 00:09:16.244 | 99.99th=[64226] 00:09:16.244 write: IOPS=4337, BW=16.9MiB/s (17.8MB/s)(17.0MiB/1005msec); 0 zone resets 00:09:16.244 slat (usec): min=2, max=28099, avg=118.42, stdev=937.11 00:09:16.244 clat (usec): min=1432, max=49253, avg=15421.97, stdev=8135.45 00:09:16.244 lat (usec): min=1444, max=49264, avg=15540.39, stdev=8210.69 00:09:16.244 clat percentiles (usec): 00:09:16.244 | 1.00th=[ 5932], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[ 9634], 00:09:16.244 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10945], 60.00th=[14746], 00:09:16.244 | 70.00th=[17171], 80.00th=[21890], 90.00th=[25297], 95.00th=[27919], 00:09:16.244 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:09:16.244 | 99.99th=[49021] 00:09:16.244 bw ( KiB/s): min=13712, max=20136, per=27.15%, avg=16924.00, stdev=4542.45, samples=2 00:09:16.244 iops : min= 3428, max= 5034, avg=4231.00, stdev=1135.61, samples=2 00:09:16.244 lat (msec) : 2=0.02%, 4=0.28%, 10=21.63%, 20=59.74%, 50=17.55% 00:09:16.244 lat (msec) : 100=0.77% 00:09:16.244 cpu : usr=3.29%, sys=6.57%, ctx=260, majf=0, minf=1 00:09:16.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:16.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.244 issued rwts: total=4096,4359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.244 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.244 job1: (groupid=0, jobs=1): err= 0: pid=2575243: Thu Nov 7 10:37:43 2024 00:09:16.244 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:09:16.244 slat (nsec): min=1724, max=7174.7k, avg=158695.42, stdev=787124.16 00:09:16.244 
clat (usec): min=10261, max=41338, avg=21091.14, stdev=7022.07 00:09:16.244 lat (usec): min=10270, max=41366, avg=21249.84, stdev=7100.20 00:09:16.244 clat percentiles (usec): 00:09:16.244 | 1.00th=[12387], 5.00th=[13173], 10.00th=[14091], 20.00th=[14484], 00:09:16.244 | 30.00th=[14746], 40.00th=[16712], 50.00th=[19530], 60.00th=[21890], 00:09:16.244 | 70.00th=[25035], 80.00th=[28705], 90.00th=[31851], 95.00th=[33817], 00:09:16.244 | 99.00th=[35914], 99.50th=[36439], 99.90th=[39584], 99.95th=[40633], 00:09:16.244 | 99.99th=[41157] 00:09:16.244 write: IOPS=3458, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1005msec); 0 zone resets 00:09:16.244 slat (usec): min=2, max=18821, avg=139.28, stdev=818.49 00:09:16.244 clat (usec): min=866, max=49306, avg=17969.58, stdev=7285.22 00:09:16.244 lat (usec): min=877, max=49342, avg=18108.86, stdev=7336.16 00:09:16.244 clat percentiles (usec): 00:09:16.244 | 1.00th=[ 5211], 5.00th=[ 9110], 10.00th=[11076], 20.00th=[12387], 00:09:16.244 | 30.00th=[14222], 40.00th=[15270], 50.00th=[16581], 60.00th=[17433], 00:09:16.244 | 70.00th=[18220], 80.00th=[22938], 90.00th=[29754], 95.00th=[32637], 00:09:16.244 | 99.00th=[45351], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:09:16.244 | 99.99th=[49546] 00:09:16.244 bw ( KiB/s): min=10400, max=16384, per=21.48%, avg=13392.00, stdev=4231.33, samples=2 00:09:16.244 iops : min= 2600, max= 4096, avg=3348.00, stdev=1057.83, samples=2 00:09:16.244 lat (usec) : 1000=0.09% 00:09:16.244 lat (msec) : 2=0.09%, 10=2.52%, 20=61.79%, 50=35.51% 00:09:16.244 cpu : usr=2.59%, sys=5.58%, ctx=320, majf=0, minf=1 00:09:16.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:16.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.244 issued rwts: total=3072,3476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.244 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.244 job2: (groupid=0, jobs=1): err= 0: pid=2575280: Thu Nov 7 10:37:43 2024 00:09:16.244 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:09:16.244 slat (nsec): min=1466, max=12140k, avg=126545.29, stdev=797502.79 00:09:16.244 clat (usec): min=4685, max=44654, avg=14399.01, stdev=6726.97 00:09:16.244 lat (usec): min=4691, max=44659, avg=14525.56, stdev=6788.87 00:09:16.244 clat percentiles (usec): 00:09:16.244 | 1.00th=[ 8356], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10552], 00:09:16.244 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12518], 60.00th=[12649], 00:09:16.244 | 70.00th=[13042], 80.00th=[14877], 90.00th=[22414], 95.00th=[32113], 00:09:16.244 | 99.00th=[40633], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:09:16.244 | 99.99th=[44827] 00:09:16.244 write: IOPS=3711, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1005msec); 0 zone resets 00:09:16.244 slat (usec): min=2, max=9311, avg=140.69, stdev=612.19 00:09:16.244 clat (usec): min=1369, max=45776, avg=20332.81, stdev=11293.31 00:09:16.244 lat (usec): min=1381, max=45785, avg=20473.50, stdev=11370.03 00:09:16.244 clat percentiles (usec): 00:09:16.244 | 1.00th=[ 2442], 5.00th=[ 6128], 10.00th=[ 8455], 20.00th=[ 9372], 00:09:16.244 | 30.00th=[11076], 40.00th=[12649], 50.00th=[17171], 60.00th=[24249], 00:09:16.244 | 70.00th=[29230], 80.00th=[32113], 90.00th=[35390], 95.00th=[39060], 00:09:16.244 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:09:16.244 | 99.99th=[45876] 00:09:16.244 bw ( KiB/s): min=14096, max=14768, per=23.15%, 
avg=14432.00, stdev=475.18, samples=2 00:09:16.244 iops : min= 3524, max= 3692, avg=3608.00, stdev=118.79, samples=2 00:09:16.244 lat (msec) : 2=0.51%, 4=0.68%, 10=17.20%, 20=52.68%, 50=28.93% 00:09:16.244 cpu : usr=2.79%, sys=4.18%, ctx=445, majf=0, minf=1 00:09:16.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:16.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.244 issued rwts: total=3584,3730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.244 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.244 job3: (groupid=0, jobs=1): err= 0: pid=2575292: Thu Nov 7 10:37:43 2024 00:09:16.244 read: IOPS=4023, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1005msec) 00:09:16.244 slat (nsec): min=1453, max=8459.1k, avg=109424.34, stdev=585494.35 00:09:16.244 clat (usec): min=2202, max=25155, avg=13682.09, stdev=3186.03 00:09:16.244 lat (usec): min=4276, max=25180, avg=13791.51, stdev=3218.94 00:09:16.244 clat percentiles (usec): 00:09:16.244 | 1.00th=[ 7242], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11469], 00:09:16.244 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12780], 60.00th=[13566], 00:09:16.244 | 70.00th=[15139], 80.00th=[16319], 90.00th=[18744], 95.00th=[20055], 00:09:16.244 | 99.00th=[21365], 99.50th=[22152], 99.90th=[22676], 99.95th=[22676], 00:09:16.244 | 99.99th=[25035] 00:09:16.244 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:16.244 slat (usec): min=2, max=27413, avg=129.47, stdev=738.15 00:09:16.244 clat (usec): min=6262, max=37885, avg=17287.17, stdev=5361.67 00:09:16.244 lat (usec): min=6266, max=37927, avg=17416.64, stdev=5399.34 00:09:16.244 clat percentiles (usec): 00:09:16.244 | 1.00th=[ 9503], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:09:16.244 | 30.00th=[12780], 40.00th=[15139], 50.00th=[17433], 60.00th=[19268], 00:09:16.244 | 70.00th=[19792], 80.00th=[21627], 90.00th=[24511], 95.00th=[28443], 00:09:16.244 | 99.00th=[29754], 99.50th=[30016], 99.90th=[31327], 99.95th=[31327], 00:09:16.244 | 99.99th=[38011] 00:09:16.244 bw ( KiB/s): min=14648, max=18120, per=26.28%, avg=16384.00, stdev=2455.07, samples=2 00:09:16.244 iops : min= 3662, max= 4530, avg=4096.00, stdev=613.77, samples=2 00:09:16.244 lat (msec) : 4=0.01%, 10=4.68%, 20=79.18%, 50=16.13% 00:09:16.244 cpu : usr=3.98%, sys=5.18%, ctx=464, majf=0, minf=1 00:09:16.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:16.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.244 issued rwts: total=4044,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.244 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.244 00:09:16.244 Run status group 0 (all jobs): 00:09:16.244 READ: bw=57.5MiB/s (60.3MB/s), 11.9MiB/s-15.9MiB/s (12.5MB/s-16.7MB/s), io=57.8MiB (60.6MB), run=1005-1005msec 00:09:16.244 WRITE: bw=60.9MiB/s (63.8MB/s), 13.5MiB/s-16.9MiB/s (14.2MB/s-17.8MB/s), io=61.2MiB (64.1MB), run=1005-1005msec 00:09:16.244 00:09:16.244 Disk stats (read/write): 00:09:16.244 nvme0n1: ios=3606/3610, merge=0/0, ticks=22074/29637, in_queue=51711, util=98.90% 00:09:16.244 nvme0n2: ios=2585/3072, merge=0/0, ticks=16106/19704, in_queue=35810, util=97.64% 00:09:16.244 nvme0n3: ios=2586/2911, merge=0/0, ticks=36191/62520, in_queue=98711, util=89.92% 00:09:16.244 nvme0n4: ios=3041/3072, 
merge=0/0, ticks=22447/30258, in_queue=52705, util=97.35% 00:09:16.244 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:16.244 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2575400 00:09:16.244 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:16.244 10:37:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:16.244 [global] 00:09:16.244 thread=1 00:09:16.244 invalidate=1 00:09:16.244 rw=read 00:09:16.244 time_based=1 00:09:16.244 runtime=10 00:09:16.244 ioengine=libaio 00:09:16.244 direct=1 00:09:16.244 bs=4096 00:09:16.244 iodepth=1 00:09:16.244 norandommap=1 00:09:16.244 numjobs=1 00:09:16.244 00:09:16.244 [job0] 00:09:16.244 filename=/dev/nvme0n1 00:09:16.244 [job1] 00:09:16.244 filename=/dev/nvme0n2 00:09:16.244 [job2] 00:09:16.244 filename=/dev/nvme0n3 00:09:16.244 [job3] 00:09:16.244 filename=/dev/nvme0n4 00:09:16.244 Could not set queue depth (nvme0n1) 00:09:16.244 Could not set queue depth (nvme0n2) 00:09:16.244 Could not set queue depth (nvme0n3) 00:09:16.244 Could not set queue depth (nvme0n4) 00:09:16.502 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.502 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.502 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.502 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.502 fio-3.35 00:09:16.502 Starting 4 threads 00:09:19.785 10:37:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:19.785 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=48218112, buflen=4096 00:09:19.785 fio: pid=2575696, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:19.785 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:19.785 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1617920, buflen=4096 00:09:19.785 fio: pid=2575695, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:19.785 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.785 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:20.042 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=44900352, buflen=4096 00:09:20.042 fio: pid=2575693, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:20.042 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:20.042 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:20.042 fio: io_u error on file /dev/nvme0n2: Operation not supported: read 
offset=47468544, buflen=4096 00:09:20.042 fio: pid=2575694, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:20.042 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:20.042 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:20.300 00:09:20.300 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2575693: Thu Nov 7 10:37:47 2024 00:09:20.300 read: IOPS=3455, BW=13.5MiB/s (14.1MB/s)(42.8MiB/3173msec) 00:09:20.300 slat (usec): min=4, max=14320, avg= 8.77, stdev=136.71 00:09:20.300 clat (usec): min=174, max=41975, avg=277.44, stdev=561.29 00:09:20.300 lat (usec): min=180, max=41989, avg=286.21, stdev=577.83 00:09:20.300 clat percentiles (usec): 00:09:20.300 | 1.00th=[ 196], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 233], 00:09:20.300 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:09:20.300 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 326], 95.00th=[ 424], 00:09:20.300 | 99.00th=[ 486], 99.50th=[ 498], 99.90th=[ 515], 99.95th=[ 529], 00:09:20.300 | 99.99th=[41157] 00:09:20.300 bw ( KiB/s): min=13592, max=14158, per=33.93%, avg=13869.00, stdev=217.25, samples=6 00:09:20.300 iops : min= 3398, max= 3539, avg=3467.17, stdev=54.18, samples=6 00:09:20.300 lat (usec) : 250=39.98%, 500=59.64%, 750=0.36% 00:09:20.300 lat (msec) : 50=0.02% 00:09:20.300 cpu : usr=0.95%, sys=3.06%, ctx=10967, majf=0, minf=1 00:09:20.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.301 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.301 issued rwts: total=10963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.301 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2575694: Thu Nov 7 10:37:47 2024 00:09:20.301 read: IOPS=3411, BW=13.3MiB/s (14.0MB/s)(45.3MiB/3397msec) 00:09:20.301 slat (usec): min=5, max=15617, avg=11.67, stdev=240.74 00:09:20.301 clat (usec): min=164, max=2442, avg=278.27, stdev=80.49 00:09:20.301 lat (usec): min=172, max=16008, avg=289.94, stdev=256.25 00:09:20.301 clat percentiles (usec): 00:09:20.301 | 1.00th=[ 184], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 225], 00:09:20.301 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:09:20.301 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 420], 95.00th=[ 474], 00:09:20.301 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 529], 99.95th=[ 570], 00:09:20.301 | 99.99th=[ 1647] 00:09:20.301 bw ( KiB/s): min=11712, max=15933, per=32.72%, avg=13375.50, stdev=1419.99, samples=6 00:09:20.301 iops : min= 2928, max= 3983, avg=3343.83, stdev=354.91, samples=6 00:09:20.301 lat (usec) : 250=38.92%, 500=59.82%, 750=1.23% 00:09:20.301 lat (msec) : 2=0.01%, 4=0.01% 00:09:20.301 cpu : usr=0.77%, sys=3.15%, ctx=11596, majf=0, minf=2 00:09:20.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.301 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.301 issued rwts: total=11590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.301 
latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.301 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2575695: Thu Nov 7 10:37:47 2024 00:09:20.301 read: IOPS=134, BW=537KiB/s (550kB/s)(1580KiB/2941msec) 00:09:20.301 slat (nsec): min=7107, max=65859, avg=9785.24, stdev=4481.49 00:09:20.301 clat (usec): min=206, max=42236, avg=7378.72, stdev=15521.87 00:09:20.301 lat (usec): min=215, max=42244, avg=7388.47, stdev=15524.40 00:09:20.301 clat percentiles (usec): 00:09:20.301 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 235], 00:09:20.301 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:09:20.301 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[41157], 95.00th=[41157], 00:09:20.301 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:20.301 | 99.99th=[42206] 00:09:20.301 bw ( KiB/s): min= 96, max= 104, per=0.24%, avg=99.20, stdev= 4.38, samples=5 00:09:20.301 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:09:20.301 lat (usec) : 250=52.53%, 500=29.55%, 750=0.25% 00:09:20.301 lat (msec) : 50=17.42% 00:09:20.301 cpu : usr=0.03%, sys=0.27%, ctx=397, majf=0, minf=2 00:09:20.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.301 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.301 issued rwts: total=396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.301 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2575696: Thu Nov 7 10:37:47 2024 00:09:20.301 read: IOPS=4345, BW=17.0MiB/s (17.8MB/s)(46.0MiB/2709msec) 00:09:20.301 slat (nsec): min=6289, max=50804, avg=7457.30, stdev=1330.47 00:09:20.301 clat (usec): min=178, max=914, avg=220.93, stdev=15.66 00:09:20.301 lat (usec): min=185, max=921, avg=228.39, stdev=15.89 00:09:20.301 clat percentiles (usec): 00:09:20.301 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:09:20.301 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:09:20.301 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 239], 95.00th=[ 245], 00:09:20.301 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[ 289], 99.95th=[ 314], 00:09:20.301 | 99.99th=[ 441] 00:09:20.301 bw ( KiB/s): min=17176, max=17736, per=42.72%, avg=17464.00, stdev=201.36, samples=5 00:09:20.301 iops : min= 4294, max= 4434, avg=4366.00, stdev=50.34, samples=5 00:09:20.301 lat (usec) : 250=97.30%, 500=2.68%, 1000=0.01% 00:09:20.301 cpu : usr=0.74%, sys=4.25%, ctx=11775, majf=0, minf=2 00:09:20.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.301 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.301 issued rwts: total=11773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.301 00:09:20.301 Run status group 0 (all jobs): 00:09:20.301 READ: bw=39.9MiB/s (41.9MB/s), 537KiB/s-17.0MiB/s (550kB/s-17.8MB/s), io=136MiB (142MB), run=2709-3397msec 00:09:20.301 00:09:20.301 Disk stats (read/write): 00:09:20.301 nvme0n1: ios=10806/0, merge=0/0, ticks=3848/0, in_queue=3848, util=99.04% 00:09:20.301 nvme0n2: ios=11539/0, merge=0/0, ticks=4032/0, in_queue=4032, util=98.22% 00:09:20.301 
nvme0n3: ios=393/0, merge=0/0, ticks=2819/0, in_queue=2819, util=96.49% 00:09:20.301 nvme0n4: ios=11366/0, merge=0/0, ticks=2441/0, in_queue=2441, util=96.41% 00:09:20.301 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:20.301 10:37:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:20.558 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:20.558 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:20.816 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:20.816 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:21.074 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:21.074 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2575400 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:21.332 nvmf hotplug test: fio failed as expected 00:09:21.332 10:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 
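The hotplug check the trace above just finished follows a simple pattern: start fio in the background against the exported namespaces, delete the backing bdevs over RPC while I/O is still in flight, and then treat a non-zero fio exit code as the expected outcome. A condensed sketch of that flow, assuming $rootdir points at the SPDK checkout and that the bdev names match the ones this run created; the real target/fio.sh derives the list from $malloc_bdevs, $raid_malloc_bdevs and $concat_malloc_bdevs rather than hard-coding it:

# start the read workload in the background and remember its pid
$rootdir/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# pull the backing bdevs out from under fio while I/O is still in flight
$rootdir/scripts/rpc.py bdev_raid_delete concat0
$rootdir/scripts/rpc.py bdev_raid_delete raid0
for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rootdir/scripts/rpc.py bdev_malloc_delete "$b"
done

# fio is expected to fail once its devices disappear
fio_status=0
wait "$fio_pid" || fio_status=$?
[ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'

# tear down the initiator connection and the subsystem
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$rootdir/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1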
00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.590 rmmod nvme_tcp 00:09:21.590 rmmod nvme_fabrics 00:09:21.590 rmmod nvme_keyring 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2572622 ']' 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2572622 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2572622 ']' 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2572622 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2572622 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2572622' 00:09:21.590 killing process with pid 2572622 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2572622 00:09:21.590 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2572622 00:09:21.848 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:21.848 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:21.848 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:21.848 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:21.848 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:21.848 10:37:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:21.848 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:21.848 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:21.848 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:21.848 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.849 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.849 10:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.381 00:09:24.381 real 0m26.838s 00:09:24.381 user 1m47.107s 00:09:24.381 sys 0m8.746s 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.381 ************************************ 00:09:24.381 END TEST nvmf_fio_target 00:09:24.381 ************************************ 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.381 ************************************ 00:09:24.381 START TEST nvmf_bdevio 00:09:24.381 ************************************ 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:24.381 * Looking for test storage... 
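The nvmftestfini teardown traced just before the bdevio banner boils down to stopping the target, unloading the initiator modules, and undoing the network plumbing. Roughly, using the namespace and interface names seen throughout this run; note that _remove_spdk_ns runs with tracing suppressed, so the netns deletion shown here is an assumption about what that helper does rather than a line taken from the log:

# stop the nvmf target process and wait for it to exit
kill "$nvmfpid" && wait "$nvmfpid"

# unload the kernel initiator modules pulled in for the test
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# drop only the firewall rules the harness added (they carry an SPDK_NVMF comment)
iptables-save | grep -v SPDK_NVMF | iptables-restore

# remove the target-side namespace and flush the initiator-side address
ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1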
00:09:24.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:24.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.381 --rc genhtml_branch_coverage=1 00:09:24.381 --rc genhtml_function_coverage=1 00:09:24.381 --rc genhtml_legend=1 00:09:24.381 --rc geninfo_all_blocks=1 00:09:24.381 --rc geninfo_unexecuted_blocks=1 00:09:24.381 00:09:24.381 ' 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:24.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.381 --rc genhtml_branch_coverage=1 00:09:24.381 --rc genhtml_function_coverage=1 00:09:24.381 --rc genhtml_legend=1 00:09:24.381 --rc geninfo_all_blocks=1 00:09:24.381 --rc geninfo_unexecuted_blocks=1 00:09:24.381 00:09:24.381 ' 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:24.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.381 --rc genhtml_branch_coverage=1 00:09:24.381 --rc genhtml_function_coverage=1 00:09:24.381 --rc genhtml_legend=1 00:09:24.381 --rc geninfo_all_blocks=1 00:09:24.381 --rc geninfo_unexecuted_blocks=1 00:09:24.381 00:09:24.381 ' 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:24.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.381 --rc genhtml_branch_coverage=1 00:09:24.381 --rc genhtml_function_coverage=1 00:09:24.381 --rc genhtml_legend=1 00:09:24.381 --rc geninfo_all_blocks=1 00:09:24.381 --rc geninfo_unexecuted_blocks=1 00:09:24.381 00:09:24.381 ' 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.381 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.382 10:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.645 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.645 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.645 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.645 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.645 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.645 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.645 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.645 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:29.646 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:29.646 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.646 10:37:56 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:29.646 Found net devices under 0000:86:00.0: cvl_0_0 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:29.646 Found net devices under 0000:86:00.1: cvl_0_1 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.646 
10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:29.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:09:29.646 00:09:29.646 --- 10.0.0.2 ping statistics --- 00:09:29.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.646 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:09:29.646 00:09:29.646 --- 10.0.0.1 ping statistics --- 00:09:29.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.646 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.646 10:37:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2579936 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2579936 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2579936 ']' 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.647 [2024-11-07 10:37:57.065129] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
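Everything from nvmftestinit down to this point is the standard two-port TCP bring-up: one e810 port moves into a private namespace for the target, the other stays in the root namespace as the initiator, and the target application is launched inside that namespace. A boiled-down version using the addresses and core mask from this run (0x78 selects cores 3-6, matching the reactor messages below); $rootdir stands in for the SPDK checkout and waitforlisten is the usual autotest helper that polls for the target's RPC socket:

# give the target its own namespace and move one NIC port into it
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# address both ends and bring the links up
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP listener port; the harness tags the rule so teardown can strip it
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF

# sanity-check reachability in both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# start the target inside the namespace and wait for its RPC socket
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
nvmfpid=$!
waitforlisten "$nvmfpid"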
00:09:29.647 [2024-11-07 10:37:57.065177] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.647 [2024-11-07 10:37:57.132071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.647 [2024-11-07 10:37:57.174284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.647 [2024-11-07 10:37:57.174324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.647 [2024-11-07 10:37:57.174331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.647 [2024-11-07 10:37:57.174338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.647 [2024-11-07 10:37:57.174343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.647 [2024-11-07 10:37:57.175825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:29.647 [2024-11-07 10:37:57.175930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:29.647 [2024-11-07 10:37:57.176036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.647 [2024-11-07 10:37:57.176037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.647 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.647 [2024-11-07 10:37:57.312251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.904 Malloc0 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.904 10:37:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:29.904 [2024-11-07 10:37:57.369961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:29.904 { 00:09:29.904 "params": { 00:09:29.904 "name": "Nvme$subsystem", 00:09:29.904 "trtype": "$TEST_TRANSPORT", 00:09:29.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.904 "adrfam": "ipv4", 00:09:29.904 "trsvcid": "$NVMF_PORT", 00:09:29.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.904 "hdgst": ${hdgst:-false}, 00:09:29.904 "ddgst": ${ddgst:-false} 00:09:29.904 }, 00:09:29.904 "method": "bdev_nvme_attach_controller" 00:09:29.904 } 00:09:29.904 EOF 00:09:29.904 )") 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:29.904 10:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:29.904 "params": { 00:09:29.904 "name": "Nvme1", 00:09:29.904 "trtype": "tcp", 00:09:29.904 "traddr": "10.0.0.2", 00:09:29.904 "adrfam": "ipv4", 00:09:29.904 "trsvcid": "4420", 00:09:29.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.904 "hdgst": false, 00:09:29.904 "ddgst": false 00:09:29.904 }, 00:09:29.904 "method": "bdev_nvme_attach_controller" 00:09:29.904 }' 00:09:29.904 [2024-11-07 10:37:57.422502] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
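The target-side provisioning traced above is just five RPC calls against the freshly started nvmf_tgt, and the initiator side of the bdevio run is the JSON blob that gen_nvmf_target_json printed a few lines earlier. Two sketches, assuming the SPDK repo root as the working directory and the default /var/tmp/spdk.sock RPC socket (path-bound UNIX sockets are visible across network namespaces, so rpc.py runs from the host side):

RPC=./scripts/rpc.py    # same calls as the rpc_cmd lines above, flags copied from this run

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The same controller entry, wrapped in the usual bdev JSON-config layout and fed to bdevio from a file instead of the /dev/fd/62 process substitution used above (the outer "subsystems" wrapper is an assumption based on that layout; only the inner method/params block appears verbatim in this log):

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json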
00:09:29.904 [2024-11-07 10:37:57.422545] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2579966 ] 00:09:29.904 [2024-11-07 10:37:57.488043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:29.904 [2024-11-07 10:37:57.532457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.904 [2024-11-07 10:37:57.532509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.904 [2024-11-07 10:37:57.532513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.161 I/O targets: 00:09:30.161 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:30.161 00:09:30.161 00:09:30.161 CUnit - A unit testing framework for C - Version 2.1-3 00:09:30.161 http://cunit.sourceforge.net/ 00:09:30.161 00:09:30.161 00:09:30.161 Suite: bdevio tests on: Nvme1n1 00:09:30.161 Test: blockdev write read block ...passed 00:09:30.161 Test: blockdev write zeroes read block ...passed 00:09:30.161 Test: blockdev write zeroes read no split ...passed 00:09:30.161 Test: blockdev write zeroes read split ...passed 00:09:30.161 Test: blockdev write zeroes read split partial ...passed 00:09:30.161 Test: blockdev reset ...[2024-11-07 10:37:57.803560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:30.161 [2024-11-07 10:37:57.803627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae5350 (9): Bad file descriptor 00:09:30.161 [2024-11-07 10:37:57.821520] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:30.161 passed 00:09:30.417 Test: blockdev write read 8 blocks ...passed 00:09:30.417 Test: blockdev write read size > 128k ...passed 00:09:30.417 Test: blockdev write read invalid size ...passed 00:09:30.417 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:30.417 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:30.417 Test: blockdev write read max offset ...passed 00:09:30.417 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:30.417 Test: blockdev writev readv 8 blocks ...passed 00:09:30.417 Test: blockdev writev readv 30 x 1block ...passed 00:09:30.673 Test: blockdev writev readv block ...passed 00:09:30.673 Test: blockdev writev readv size > 128k ...passed 00:09:30.673 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:30.673 Test: blockdev comparev and writev ...[2024-11-07 10:37:58.118096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.673 [2024-11-07 10:37:58.118128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:30.673 [2024-11-07 10:37:58.118143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.673 [2024-11-07 10:37:58.118151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:30.673 [2024-11-07 10:37:58.118398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.673 [2024-11-07 10:37:58.118408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:30.673 [2024-11-07 10:37:58.118419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.674 [2024-11-07 10:37:58.118427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:30.674 [2024-11-07 10:37:58.118668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.674 [2024-11-07 10:37:58.118678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:30.674 [2024-11-07 10:37:58.118690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.674 [2024-11-07 10:37:58.118698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:30.674 [2024-11-07 10:37:58.118940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.674 [2024-11-07 10:37:58.118950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:30.674 [2024-11-07 10:37:58.118961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:30.674 [2024-11-07 10:37:58.118969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:30.674 passed 00:09:30.674 Test: blockdev nvme passthru rw ...passed 00:09:30.674 Test: blockdev nvme passthru vendor specific ...[2024-11-07 10:37:58.200837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:30.674 [2024-11-07 10:37:58.200857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:30.674 [2024-11-07 10:37:58.200975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:30.674 [2024-11-07 10:37:58.200984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:30.674 [2024-11-07 10:37:58.201093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:30.674 [2024-11-07 10:37:58.201103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:30.674 [2024-11-07 10:37:58.201208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:30.674 [2024-11-07 10:37:58.201217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:30.674 passed 00:09:30.674 Test: blockdev nvme admin passthru ...passed 00:09:30.674 Test: blockdev copy ...passed 00:09:30.674 00:09:30.674 Run Summary: Type Total Ran Passed Failed Inactive 00:09:30.674 suites 1 1 n/a 0 0 00:09:30.674 tests 23 23 23 0 0 00:09:30.674 asserts 152 152 152 0 n/a 00:09:30.674 00:09:30.674 Elapsed time = 1.128 seconds 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.931 rmmod nvme_tcp 00:09:30.931 rmmod nvme_fabrics 00:09:30.931 rmmod nvme_keyring 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
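With all 23 bdevio cases passed, teardown begins in the reverse order of setup; condensed, the subsystem removal and initiator-module unload traced above look roughly like this (a sketch of the same commands, not the exact nvmftestfini body):

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

sync
modprobe -v -r nvme-tcp        # the verbose removal prints the rmmod nvme_tcp /
modprobe -v -r nvme-fabrics    # nvme_fabrics / nvme_keyring lines captured above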
00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2579936 ']' 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2579936 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2579936 ']' 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2579936 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2579936 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2579936' 00:09:30.931 killing process with pid 2579936 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2579936 00:09:30.931 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2579936 00:09:31.188 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:31.188 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:31.189 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:31.189 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:31.189 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:31.189 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:31.189 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:31.189 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:31.189 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:31.189 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.189 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.189 10:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.087 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:33.346 00:09:33.346 real 0m9.209s 00:09:33.346 user 0m9.357s 00:09:33.346 sys 0m4.428s 00:09:33.346 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:33.346 10:38:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.346 ************************************ 00:09:33.346 END TEST nvmf_bdevio 00:09:33.346 ************************************ 00:09:33.346 10:38:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:33.346 00:09:33.346 real 4m27.432s 00:09:33.346 user 10m21.079s 00:09:33.346 sys 1m33.761s 
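The remaining cleanup traced above kills the namespaced target and undoes the host-level tweaks from the setup phase. Roughly (ip netns delete stands in for the _remove_spdk_ns helper, whose body is not shown in this excerpt):

kill "$nvmfpid"                  # nvmf_tgt pid recorded by nvmfappstart (2579936 in this run)

# The save/restore round-trip drops only the comment-tagged SPDK_NVMF rule.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Deleting the namespace returns the physical port to the root namespace.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1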
00:09:33.346 10:38:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:33.346 10:38:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.346 ************************************ 00:09:33.346 END TEST nvmf_target_core 00:09:33.346 ************************************ 00:09:33.346 10:38:00 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:33.346 10:38:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:33.346 10:38:00 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:33.346 10:38:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.346 ************************************ 00:09:33.346 START TEST nvmf_target_extra 00:09:33.346 ************************************ 00:09:33.346 10:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:33.346 * Looking for test storage... 00:09:33.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:33.346 10:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:33.346 10:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:33.346 10:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.346 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:33.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.606 --rc genhtml_branch_coverage=1 00:09:33.606 --rc genhtml_function_coverage=1 00:09:33.606 --rc genhtml_legend=1 00:09:33.606 --rc geninfo_all_blocks=1 00:09:33.606 --rc geninfo_unexecuted_blocks=1 00:09:33.606 00:09:33.606 ' 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:33.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.606 --rc genhtml_branch_coverage=1 00:09:33.606 --rc genhtml_function_coverage=1 00:09:33.606 --rc genhtml_legend=1 00:09:33.606 --rc geninfo_all_blocks=1 00:09:33.606 --rc geninfo_unexecuted_blocks=1 00:09:33.606 00:09:33.606 ' 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:33.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.606 --rc genhtml_branch_coverage=1 00:09:33.606 --rc genhtml_function_coverage=1 00:09:33.606 --rc genhtml_legend=1 00:09:33.606 --rc geninfo_all_blocks=1 00:09:33.606 --rc geninfo_unexecuted_blocks=1 00:09:33.606 00:09:33.606 ' 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:33.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.606 --rc genhtml_branch_coverage=1 00:09:33.606 --rc genhtml_function_coverage=1 00:09:33.606 --rc genhtml_legend=1 00:09:33.606 --rc geninfo_all_blocks=1 00:09:33.606 --rc geninfo_unexecuted_blocks=1 00:09:33.606 00:09:33.606 ' 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
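The lcov probe above ("lt 1.15 2" via cmp_versions) is a plain field-by-field dotted-version comparison that decides whether the older --rc lcov_* option spelling is needed. A condensed stand-in for that check (simplified; the real scripts/common.sh helper handles more operators and version suffixes):

# Succeeds (returns 0) if version $1 is older than version $2, e.g. lt 1.15 2.
lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}

lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov: add the --rc lcov_* options"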
00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.606 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:33.607 ************************************ 00:09:33.607 START TEST nvmf_example 00:09:33.607 ************************************ 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:33.607 * Looking for test storage... 
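The "[: : integer expression expected" complaint from nvmf/common.sh line 33 is bash's test builtin refusing a numeric comparison against an empty string; it is noisy but harmless here because the failed test simply evaluates false and the script carries on. A tiny reproduction plus one tolerant guard (illustrative only, not a change the suite actually makes):

flag=''                         # stands in for whichever knob is empty in this run
[ "$flag" -eq 1 ] && echo on    # -> "[: : integer expression expected", exit status 2

# Defaulting the empty value keeps the comparison numeric and quiet.
if [ "${flag:-0}" -eq 1 ]; then
    echo on
fi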
00:09:33.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:33.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.607 --rc genhtml_branch_coverage=1 00:09:33.607 --rc genhtml_function_coverage=1 00:09:33.607 --rc genhtml_legend=1 00:09:33.607 --rc geninfo_all_blocks=1 00:09:33.607 --rc geninfo_unexecuted_blocks=1 00:09:33.607 00:09:33.607 ' 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:33.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.607 --rc genhtml_branch_coverage=1 00:09:33.607 --rc genhtml_function_coverage=1 00:09:33.607 --rc genhtml_legend=1 00:09:33.607 --rc geninfo_all_blocks=1 00:09:33.607 --rc geninfo_unexecuted_blocks=1 00:09:33.607 00:09:33.607 ' 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:33.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.607 --rc genhtml_branch_coverage=1 00:09:33.607 --rc genhtml_function_coverage=1 00:09:33.607 --rc genhtml_legend=1 00:09:33.607 --rc geninfo_all_blocks=1 00:09:33.607 --rc geninfo_unexecuted_blocks=1 00:09:33.607 00:09:33.607 ' 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:33.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.607 --rc genhtml_branch_coverage=1 00:09:33.607 --rc genhtml_function_coverage=1 00:09:33.607 --rc genhtml_legend=1 00:09:33.607 --rc geninfo_all_blocks=1 00:09:33.607 --rc geninfo_unexecuted_blocks=1 00:09:33.607 00:09:33.607 ' 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:33.607 10:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.607 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:33.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:33.608 10:38:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:33.608 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:33.866 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:33.866 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:33.866 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.866 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:33.866 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:33.866 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:33.866 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.866 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.866 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.866 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:33.866 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:33.866 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:33.866 10:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:39.146 10:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:39.146 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:39.146 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:39.146 Found net devices under 0000:86:00.0: cvl_0_0 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:39.146 Found net devices under 0000:86:00.1: cvl_0_1 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.146 10:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:39.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:39.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:09:39.146 00:09:39.146 --- 10.0.0.2 ping statistics --- 00:09:39.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.146 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:09:39.146 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:39.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:39.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:09:39.147 00:09:39.147 --- 10.0.0.1 ping statistics --- 00:09:39.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.147 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2583768 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2583768 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 2583768 ']' 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:39.147 10:38:06 
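(Aside) Everything nvmf_tcp_init does above boils down to a small two-ended topology: the target-side port is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened, and connectivity is verified with ping before the example target is started inside the namespace. A condensed sketch of those steps is below (root required; the interface names, addresses and iptables comment tag are copied from this run, not invented).

  # Namespace-based NVMe/TCP test topology, condensed from the trace above.
  TARGET_NS=cvl_0_0_ns_spdk
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side stays in the host
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  # Open the NVMe/TCP port, tagged so teardown can strip it again later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                          # host -> namespace
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1               # namespace -> host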
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:39.147 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.709 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.710 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:39.710 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.710 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.710 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.710 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.710 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.710 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:39.710 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:39.710 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.710 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:39.710 10:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:51.894 Initializing NVMe Controllers 00:09:51.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:51.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:51.894 Initialization complete. Launching workers. 00:09:51.894 ======================================================== 00:09:51.894 Latency(us) 00:09:51.894 Device Information : IOPS MiB/s Average min max 00:09:51.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18282.42 71.42 3501.25 613.10 16189.29 00:09:51.894 ======================================================== 00:09:51.894 Total : 18282.42 71.42 3501.25 613.10 16189.29 00:09:51.894 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.894 rmmod nvme_tcp 00:09:51.894 rmmod nvme_fabrics 00:09:51.894 rmmod nvme_keyring 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2583768 ']' 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2583768 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 2583768 ']' 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 2583768 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2583768 00:09:51.894 10:38:17 
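(Aside) The RPC calls traced above provision the example target end to end: a TCP transport, a 64 MiB / 512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420, after which spdk_nvme_perf drives the 64-deep 4 KiB 30/70 random read/write workload whose latency table appears above. The rpc_cmd wrapper hides scripts/rpc.py; a hedged equivalent using rpc.py directly is sketched here, with SPDK_ROOT standing in for the workspace path shown in the log.

  # Hedged sketch of the provisioning + perf run traced above (assumes the
  # example nvmf app is already listening on /var/tmp/spdk.sock).
  SPDK_ROOT=/path/to/spdk                                   # assumption: your checkout, no spaces in path
  RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192              # transport options as used in this run
  $RPC bdev_malloc_create 64 512                            # 64 MiB bdev, 512 B blocks -> Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$SPDK_ROOT/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'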
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2583768' 00:09:51.894 killing process with pid 2583768 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 2583768 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 2583768 00:09:51.894 nvmf threads initialize successfully 00:09:51.894 bdev subsystem init successfully 00:09:51.894 created a nvmf target service 00:09:51.894 create targets's poll groups done 00:09:51.894 all subsystems of target started 00:09:51.894 nvmf target is running 00:09:51.894 all subsystems of target stopped 00:09:51.894 destroy targets's poll groups done 00:09:51.894 destroyed the nvmf target service 00:09:51.894 bdev subsystem finish successfully 00:09:51.894 nvmf threads destroy successfully 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.894 10:38:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.469 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:52.469 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:52.469 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:52.469 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.469 00:09:52.469 real 0m18.871s 00:09:52.469 user 0m45.742s 00:09:52.469 sys 0m5.359s 00:09:52.469 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:52.469 10:38:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.469 ************************************ 00:09:52.469 END TEST nvmf_example 00:09:52.469 ************************************ 00:09:52.469 10:38:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
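(Aside) Teardown in the trace above is the mirror image of setup: stop the app, strip every firewall rule carrying the SPDK_NVMF comment tag, drop the test namespace and flush the leftover address. A short sketch of that cleanup; treating _remove_spdk_ns as a plain `ip netns delete` is an assumption, since its body is not visible in this log.

  # Cleanup sketch matching the iptr / remove_spdk_ns / addr-flush steps above.
  iptables-save | grep -v SPDK_NVMF | iptables-restore        # drop only the tagged rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true         # assumption about what _remove_spdk_ns does
  ip -4 addr flush cvl_0_1                                    # release 10.0.0.1/24 on the initiator side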
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:52.469 10:38:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:52.469 10:38:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:52.469 10:38:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:52.469 ************************************ 00:09:52.469 START TEST nvmf_filesystem 00:09:52.469 ************************************ 00:09:52.469 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:52.469 * Looking for test storage... 00:09:52.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.469 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:52.469 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:52.469 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:52.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.729 --rc genhtml_branch_coverage=1 00:09:52.729 --rc genhtml_function_coverage=1 00:09:52.729 --rc genhtml_legend=1 00:09:52.729 --rc geninfo_all_blocks=1 00:09:52.729 --rc geninfo_unexecuted_blocks=1 00:09:52.729 00:09:52.729 ' 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:52.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.729 --rc genhtml_branch_coverage=1 00:09:52.729 --rc genhtml_function_coverage=1 00:09:52.729 --rc genhtml_legend=1 00:09:52.729 --rc geninfo_all_blocks=1 00:09:52.729 --rc geninfo_unexecuted_blocks=1 00:09:52.729 00:09:52.729 ' 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:52.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.729 --rc genhtml_branch_coverage=1 00:09:52.729 --rc genhtml_function_coverage=1 00:09:52.729 --rc genhtml_legend=1 00:09:52.729 --rc geninfo_all_blocks=1 00:09:52.729 --rc geninfo_unexecuted_blocks=1 00:09:52.729 00:09:52.729 ' 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:52.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.729 --rc genhtml_branch_coverage=1 00:09:52.729 --rc genhtml_function_coverage=1 00:09:52.729 --rc genhtml_legend=1 00:09:52.729 --rc geninfo_all_blocks=1 00:09:52.729 --rc geninfo_unexecuted_blocks=1 00:09:52.729 00:09:52.729 ' 00:09:52.729 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:52.729 10:38:20 
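(Aside) Before any lcov option is chosen, scripts/common.sh compares the installed lcov version against 2 field by field (the `lt 1.15 2` / cmp_versions trace above) so the right flag set can be exported. A standalone sketch of that dotted-version less-than test, written from the behaviour visible in the trace rather than copied from the script:

  # Dotted-version "less than": split on dots, compare numerically per field.
  version_lt() {
      local IFS=.
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}                # missing fields count as 0
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1                                     # equal is not less-than
  }
  version_lt 1.15 2 && echo "lcov older than 2.0: use the pre-2.0 LCOV_OPTS"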
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:52.730 
10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:52.730 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:52.731 #define SPDK_CONFIG_H 00:09:52.731 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:52.731 #define SPDK_CONFIG_APPS 1 00:09:52.731 #define SPDK_CONFIG_ARCH native 00:09:52.731 #undef SPDK_CONFIG_ASAN 00:09:52.731 #undef SPDK_CONFIG_AVAHI 00:09:52.731 #undef SPDK_CONFIG_CET 00:09:52.731 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:52.731 #define SPDK_CONFIG_COVERAGE 1 00:09:52.731 #define SPDK_CONFIG_CROSS_PREFIX 00:09:52.731 #undef SPDK_CONFIG_CRYPTO 00:09:52.731 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:52.731 #undef SPDK_CONFIG_CUSTOMOCF 00:09:52.731 #undef SPDK_CONFIG_DAOS 00:09:52.731 #define SPDK_CONFIG_DAOS_DIR 00:09:52.731 #define SPDK_CONFIG_DEBUG 1 00:09:52.731 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:52.731 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:52.731 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:52.731 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:52.731 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:52.731 #undef SPDK_CONFIG_DPDK_UADK 00:09:52.731 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:52.731 #define SPDK_CONFIG_EXAMPLES 1 00:09:52.731 #undef SPDK_CONFIG_FC 00:09:52.731 #define SPDK_CONFIG_FC_PATH 00:09:52.731 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:52.731 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:52.731 #define SPDK_CONFIG_FSDEV 1 00:09:52.731 #undef SPDK_CONFIG_FUSE 00:09:52.731 #undef SPDK_CONFIG_FUZZER 00:09:52.731 #define SPDK_CONFIG_FUZZER_LIB 00:09:52.731 #undef SPDK_CONFIG_GOLANG 00:09:52.731 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:52.731 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:52.731 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:52.731 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:52.731 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:52.731 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:52.731 #undef SPDK_CONFIG_HAVE_LZ4 00:09:52.731 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:52.731 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:52.731 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:52.731 #define SPDK_CONFIG_IDXD 1 00:09:52.731 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:52.731 #undef SPDK_CONFIG_IPSEC_MB 00:09:52.731 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:52.731 #define SPDK_CONFIG_ISAL 1 00:09:52.731 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:52.731 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:52.731 #define SPDK_CONFIG_LIBDIR 00:09:52.731 #undef SPDK_CONFIG_LTO 00:09:52.731 #define SPDK_CONFIG_MAX_LCORES 128 00:09:52.731 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:52.731 #define SPDK_CONFIG_NVME_CUSE 1 00:09:52.731 #undef SPDK_CONFIG_OCF 00:09:52.731 #define SPDK_CONFIG_OCF_PATH 00:09:52.731 #define SPDK_CONFIG_OPENSSL_PATH 00:09:52.731 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:52.731 #define SPDK_CONFIG_PGO_DIR 00:09:52.731 #undef SPDK_CONFIG_PGO_USE 00:09:52.731 #define SPDK_CONFIG_PREFIX /usr/local 00:09:52.731 #undef SPDK_CONFIG_RAID5F 00:09:52.731 #undef SPDK_CONFIG_RBD 00:09:52.731 #define SPDK_CONFIG_RDMA 1 00:09:52.731 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:52.731 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:52.731 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:52.731 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:52.731 #define SPDK_CONFIG_SHARED 1 00:09:52.731 #undef SPDK_CONFIG_SMA 00:09:52.731 #define SPDK_CONFIG_TESTS 1 00:09:52.731 #undef SPDK_CONFIG_TSAN 
00:09:52.731 #define SPDK_CONFIG_UBLK 1 00:09:52.731 #define SPDK_CONFIG_UBSAN 1 00:09:52.731 #undef SPDK_CONFIG_UNIT_TESTS 00:09:52.731 #undef SPDK_CONFIG_URING 00:09:52.731 #define SPDK_CONFIG_URING_PATH 00:09:52.731 #undef SPDK_CONFIG_URING_ZNS 00:09:52.731 #undef SPDK_CONFIG_USDT 00:09:52.731 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:52.731 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:52.731 #define SPDK_CONFIG_VFIO_USER 1 00:09:52.731 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:52.731 #define SPDK_CONFIG_VHOST 1 00:09:52.731 #define SPDK_CONFIG_VIRTIO 1 00:09:52.731 #undef SPDK_CONFIG_VTUNE 00:09:52.731 #define SPDK_CONFIG_VTUNE_DIR 00:09:52.731 #define SPDK_CONFIG_WERROR 1 00:09:52.731 #define SPDK_CONFIG_WPDK_DIR 00:09:52.731 #undef SPDK_CONFIG_XNVME 00:09:52.731 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
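(Aside) The long escaped `[[ ... == *\#\d\e\f\i\n\e\ ... ]]` match above is common/applications.sh reading include/spdk/config.h (dumped in full in the trace) and checking whether the build was configured with debug before honouring SPDK_AUTOTEST_DEBUG_APPS. A hedged snippet with the same effect, with the config.h path written relative to an assumed checkout:

  # Debug-build probe in the spirit of the applications.sh check traced above.
  config_h="$SPDK_ROOT/include/spdk/config.h"                 # assumption: SPDK_ROOT is your checkout
  if [[ -e "$config_h" && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build: SPDK_AUTOTEST_DEBUG_APPS may be honoured"
  fi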
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:52.731 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:52.731 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
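(Aside) The pm/common trace above decides which resource collectors this run will start: collect-cpu-load and collect-vmstat are always on, while collect-cpu-temp and collect-bmc-pm are added only on a bare-metal Linux host (hence the string of dots compared against QEMU and the /.dockerenv check). Where that product string comes from is not visible here, so the sysfs path below is an assumption; the rest mirrors the trace.

  # Collector selection sketch; /sys/class/dmi/id/product_name is an assumed
  # source for the product string the trace compares against "QEMU".
  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
  if [[ $(uname -s) == Linux ]]; then
      product=$(cat /sys/class/dmi/id/product_name 2>/dev/null)
      if [[ $product != QEMU && ! -e /.dockerenv ]]; then
          MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)   # bare metal: temperature + BMC power
      fi
  fi
  echo "resource monitors: ${MONITOR_RESOURCES[*]}"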
00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:52.732 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:52.732 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
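The very long LD_LIBRARY_PATH and PYTHONPATH values above are built by repeatedly prepending the same SPDK, DPDK, and libvfio-user directories, apparently once per nested sourcing of the common test script. Condensed to a single pass, the exports amount to roughly the following sketch (paths taken from the trace above; the single-pass layering is a simplification, not the verbatim script):

    # prepend SPDK, DPDK and libvfio-user library dirs, as seen in the trace above
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    export SPDK_LIB_DIR=$rootdir/build/lib
    export DPDK_LIB_DIR=$rootdir/dpdk/build/lib
    export VFIO_LIB_DIR=$rootdir/build/libvfio-user/usr/local/lib
    export LD_LIBRARY_PATH=$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR:$LD_LIBRARY_PATH
    export PYTHONPATH=$rootdir/python:$rootdir/test/rpc_plugins:$PYTHONPATH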
00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
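The entries above configure the sanitizer runtime options and a LeakSanitizer suppression for the known libfuse3 leak. A condensed sketch of the equivalent shell, with the option strings copied verbatim from the trace (the real script assembles the suppression file slightly differently via cat/echo):

    # sanitizer runtime knobs used by the harness (values exactly as logged above)
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    # suppress the known libfuse3 leak for LeakSanitizer
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo leak:libfuse3.so > "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file

    # default JSON-RPC socket used by rpc_cmd later in the run
    export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock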
00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:09:52.733 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2586129 ]] 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2586129 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
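The set_test_storage 2147483648 call traced below reserves roughly 2 GiB of scratch space for the test, preferring the test directory itself and falling back to a mktemp directory when the backing filesystem is too small. A simplified paraphrase of that selection logic follows (testdir is supplied by the calling test script; the df handling here is condensed, whereas the real function parses `df -T` into arrays as the trace shows):

    set_test_storage() {
        local requested_size=$1 target_dir avail
        local storage_fallback
        storage_fallback=$(mktemp -udt spdk.XXXXXX)        # /tmp/spdk.9F25pK in this run
        # candidates: the test dir itself, then two locations under the tmp fallback
        local storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
        for target_dir in "${storage_candidates[@]}"; do
            mkdir -p "$target_dir"
            # paraphrased space check; the real function compares avails[mount] against requested_size
            avail=$(df -B1 --output=avail "$target_dir" | tail -1)
            if (( avail >= requested_size )); then
                export SPDK_TEST_STORAGE=$target_dir
                printf '* Found test storage at %s\n' "$target_dir"
                return 0
            fi
        done
        return 1
    }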
00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.9F25pK 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.9F25pK/tests/target /tmp/spdk.9F25pK 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:09:52.734 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=189046493184 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963981824 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6917488640 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97971957760 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981988864 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169753088 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192797184 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23044096 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97980973056 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981992960 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1019904 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:52.734 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596382208 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596394496 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:09:52.734 * Looking for test storage... 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=189046493184 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9132081152 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.734 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:09:52.735 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:52.735 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:52.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.994 --rc genhtml_branch_coverage=1 00:09:52.994 --rc genhtml_function_coverage=1 00:09:52.994 --rc genhtml_legend=1 00:09:52.994 --rc geninfo_all_blocks=1 00:09:52.994 --rc geninfo_unexecuted_blocks=1 00:09:52.994 00:09:52.994 ' 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:52.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.994 --rc genhtml_branch_coverage=1 00:09:52.994 --rc genhtml_function_coverage=1 00:09:52.994 --rc genhtml_legend=1 00:09:52.994 --rc geninfo_all_blocks=1 00:09:52.994 --rc geninfo_unexecuted_blocks=1 00:09:52.994 00:09:52.994 ' 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:52.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.994 --rc genhtml_branch_coverage=1 00:09:52.994 --rc genhtml_function_coverage=1 00:09:52.994 --rc genhtml_legend=1 00:09:52.994 --rc geninfo_all_blocks=1 00:09:52.994 --rc geninfo_unexecuted_blocks=1 00:09:52.994 00:09:52.994 ' 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:52.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.994 --rc genhtml_branch_coverage=1 00:09:52.994 --rc genhtml_function_coverage=1 00:09:52.994 --rc genhtml_legend=1 00:09:52.994 --rc geninfo_all_blocks=1 00:09:52.994 --rc geninfo_unexecuted_blocks=1 00:09:52.994 00:09:52.994 ' 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.994 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:52.995 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:59.572 
10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:59.572 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:59.572 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.572 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:59.573 Found net devices under 0000:86:00.0: cvl_0_0 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:59.573 Found net devices under 
0000:86:00.1: cvl_0_1 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:59.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:09:59.573 00:09:59.573 --- 10.0.0.2 ping statistics --- 00:09:59.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.573 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:59.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:09:59.573 00:09:59.573 --- 10.0.0.1 ping statistics --- 00:09:59.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.573 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.573 ************************************ 00:09:59.573 START TEST nvmf_filesystem_no_in_capsule 00:09:59.573 ************************************ 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
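Having found the two E810 ports (0000:86:00.0 and 0000:86:00.1, net devices cvl_0_0 and cvl_0_1), nvmf_tcp_init above isolates the target port in its own network namespace, addresses both sides, opens TCP port 4420, and checks reachability in both directions. The same steps, condensed from the commands in the trace (the harness applies the iptables rule through its ipts wrapper, which only adds the SPDK_NVMF comment seen above):

    # target port goes into its own netns; initiator port stays in the root namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP traffic to port 4420 on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # sanity check: each side can reach the other
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1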
00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2589215 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2589215 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2589215 ']' 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:59.573 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.573 [2024-11-07 10:38:26.521996] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:09:59.573 [2024-11-07 10:38:26.522042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.573 [2024-11-07 10:38:26.589206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.573 [2024-11-07 10:38:26.632746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.573 [2024-11-07 10:38:26.632780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.573 [2024-11-07 10:38:26.632788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.573 [2024-11-07 10:38:26.632794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.573 [2024-11-07 10:38:26.632799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
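The nvmf_filesystem_no_in_capsule test below starts nvmf_tgt inside the target namespace and then configures it over JSON-RPC; rpc_cmd in the trace is effectively a wrapper around scripts/rpc.py talking to the /var/tmp/spdk.sock socket exported earlier. Written out as plain rpc.py calls, the sequence traced below is approximately the following sketch (backgrounding and the readiness wait are simplified here; the harness itself uses nvmfappstart and waitforlisten):

    # start the NVMe-oF target inside the target namespace, as in the trace
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport, 8192-byte IO unit, 0 bytes of in-capsule data (the "no_in_capsule" case);
    # the extra -o flag is part of the harness's default TCP transport options
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    # 512 MiB malloc bdev with 512-byte blocks to back the filesystem under test
    $rpc bdev_malloc_create 512 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420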
00:09:59.573 [2024-11-07 10:38:26.634278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.574 [2024-11-07 10:38:26.634376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.574 [2024-11-07 10:38:26.634395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.574 [2024-11-07 10:38:26.634397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.574 [2024-11-07 10:38:26.783761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.574 Malloc1 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.574 10:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.574 [2024-11-07 10:38:26.928290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:09:59.574 { 00:09:59.574 "name": "Malloc1", 00:09:59.574 "aliases": [ 00:09:59.574 "b3425175-5d30-4627-83a4-8b727150ff6c" 00:09:59.574 ], 00:09:59.574 "product_name": "Malloc disk", 00:09:59.574 "block_size": 512, 00:09:59.574 "num_blocks": 1048576, 00:09:59.574 "uuid": "b3425175-5d30-4627-83a4-8b727150ff6c", 00:09:59.574 "assigned_rate_limits": { 00:09:59.574 "rw_ios_per_sec": 0, 00:09:59.574 "rw_mbytes_per_sec": 0, 00:09:59.574 "r_mbytes_per_sec": 0, 00:09:59.574 "w_mbytes_per_sec": 0 00:09:59.574 }, 00:09:59.574 "claimed": true, 00:09:59.574 "claim_type": "exclusive_write", 00:09:59.574 "zoned": false, 00:09:59.574 "supported_io_types": { 00:09:59.574 "read": 
true, 00:09:59.574 "write": true, 00:09:59.574 "unmap": true, 00:09:59.574 "flush": true, 00:09:59.574 "reset": true, 00:09:59.574 "nvme_admin": false, 00:09:59.574 "nvme_io": false, 00:09:59.574 "nvme_io_md": false, 00:09:59.574 "write_zeroes": true, 00:09:59.574 "zcopy": true, 00:09:59.574 "get_zone_info": false, 00:09:59.574 "zone_management": false, 00:09:59.574 "zone_append": false, 00:09:59.574 "compare": false, 00:09:59.574 "compare_and_write": false, 00:09:59.574 "abort": true, 00:09:59.574 "seek_hole": false, 00:09:59.574 "seek_data": false, 00:09:59.574 "copy": true, 00:09:59.574 "nvme_iov_md": false 00:09:59.574 }, 00:09:59.574 "memory_domains": [ 00:09:59.574 { 00:09:59.574 "dma_device_id": "system", 00:09:59.574 "dma_device_type": 1 00:09:59.574 }, 00:09:59.574 { 00:09:59.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.574 "dma_device_type": 2 00:09:59.574 } 00:09:59.574 ], 00:09:59.574 "driver_specific": {} 00:09:59.574 } 00:09:59.574 ]' 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:09:59.574 10:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:09:59.574 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:09:59.574 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:09:59.574 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:09:59.574 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:09:59.574 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:59.574 10:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:00.508 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.508 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:00.508 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.508 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:00.508 10:38:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:03.033 10:38:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:03.597 10:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.528 ************************************ 00:10:04.528 START TEST filesystem_ext4 00:10:04.528 ************************************ 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
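Each filesystem_* subtest from here on runs the same short cycle against the exported namespace. Roughly, based on the commands traced out of target/filesystem.sh (error handling and the retry counter around umount are omitted, and the force flag differs per filesystem):

    # one round of nvmf_filesystem_create, simplified; $nvme_name comes from
    # the lsblk lookup above and $nvmfpid from the earlier nvmfappstart
    mkfs.ext4 -F /dev/${nvme_name}p1        # btrfs/xfs use -f instead of -F
    mount /dev/${nvme_name}p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                      # target process must still be alive

The closing lsblk/grep pair then confirms that nvme0n1 and nvme0n1p1 are still visible after the unmount.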
00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:04.528 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:04.528 mke2fs 1.47.0 (5-Feb-2023) 00:10:04.528 Discarding device blocks: 0/522240 done 00:10:04.528 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:04.528 Filesystem UUID: e18f21fe-e8ee-45fa-a760-dd0152a276b7 00:10:04.528 Superblock backups stored on blocks: 00:10:04.528 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:04.528 00:10:04.528 Allocating group tables: 0/64 done 00:10:04.528 Writing inode tables: 0/64 done 00:10:05.093 Creating journal (8192 blocks): done 00:10:05.093 Writing superblocks and filesystem accounting information: 0/64 done 00:10:05.093 00:10:05.093 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:05.093 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:11.644 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:11.644 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:11.644 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:11.644 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:11.644 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:11.644 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:11.645 
10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2589215 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:11.645 00:10:11.645 real 0m6.773s 00:10:11.645 user 0m0.035s 00:10:11.645 sys 0m0.064s 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:11.645 ************************************ 00:10:11.645 END TEST filesystem_ext4 00:10:11.645 ************************************ 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.645 ************************************ 00:10:11.645 START TEST filesystem_btrfs 00:10:11.645 ************************************ 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:11.645 10:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:11.645 10:38:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:11.903 btrfs-progs v6.8.1 00:10:11.903 See https://btrfs.readthedocs.io for more information. 00:10:11.903 00:10:11.903 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:11.903 NOTE: several default settings have changed in version 5.15, please make sure 00:10:11.903 this does not affect your deployments: 00:10:11.903 - DUP for metadata (-m dup) 00:10:11.903 - enabled no-holes (-O no-holes) 00:10:11.903 - enabled free-space-tree (-R free-space-tree) 00:10:11.903 00:10:11.903 Label: (null) 00:10:11.903 UUID: e7262a40-911d-468a-896d-019d23068122 00:10:11.903 Node size: 16384 00:10:11.903 Sector size: 4096 (CPU page size: 4096) 00:10:11.903 Filesystem size: 510.00MiB 00:10:11.903 Block group profiles: 00:10:11.903 Data: single 8.00MiB 00:10:11.903 Metadata: DUP 32.00MiB 00:10:11.903 System: DUP 8.00MiB 00:10:11.903 SSD detected: yes 00:10:11.903 Zoned device: no 00:10:11.903 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:11.903 Checksum: crc32c 00:10:11.903 Number of devices: 1 00:10:11.903 Devices: 00:10:11.903 ID SIZE PATH 00:10:11.903 1 510.00MiB /dev/nvme0n1p1 00:10:11.903 00:10:11.903 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:11.903 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:12.161 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:12.161 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:12.161 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:12.161 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:12.161 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:12.161 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:12.161 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2589215 00:10:12.161 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:12.162 
10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:12.162 00:10:12.162 real 0m0.739s 00:10:12.162 user 0m0.031s 00:10:12.162 sys 0m0.109s 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:12.162 ************************************ 00:10:12.162 END TEST filesystem_btrfs 00:10:12.162 ************************************ 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.162 ************************************ 00:10:12.162 START TEST filesystem_xfs 00:10:12.162 ************************************ 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:12.162 10:38:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:12.420 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:12.420 = sectsz=512 attr=2, projid32bit=1 00:10:12.420 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:12.420 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:12.420 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:12.420 = sunit=0 swidth=0 blks 00:10:12.420 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:12.420 log =internal log bsize=4096 blocks=16384, version=2 00:10:12.420 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:12.420 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:13.406 Discarding blocks...Done. 00:10:13.406 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:13.406 10:38:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2589215 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:15.333 00:10:15.333 real 0m2.962s 00:10:15.333 user 0m0.023s 00:10:15.333 sys 0m0.077s 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:15.333 ************************************ 00:10:15.333 END TEST filesystem_xfs 00:10:15.333 ************************************ 00:10:15.333 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.596 10:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2589215 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2589215 ']' 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2589215 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2589215 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2589215' 00:10:15.596 killing process with pid 2589215 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 2589215 00:10:15.596 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 2589215 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:16.163 00:10:16.163 real 0m17.090s 00:10:16.163 user 1m7.279s 00:10:16.163 sys 0m1.392s 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.163 ************************************ 00:10:16.163 END TEST nvmf_filesystem_no_in_capsule 00:10:16.163 ************************************ 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:16.163 ************************************ 00:10:16.163 START TEST nvmf_filesystem_in_capsule 00:10:16.163 ************************************ 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2592311 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2592311 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2592311 ']' 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
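The nvmf_filesystem_in_capsule suite that starts here repeats the same setup and the same ext4/btrfs/xfs rounds; the only functional difference visible in the trace is the in-capsule data size handed to the transport a few entries below. Expressed directly against scripts/rpc.py, which is what the rpc_cmd helper wraps:

    # first suite: in-capsule data disabled
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # this suite: commands may carry up to 4096 bytes of in-capsule data
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096

Everything else (the Malloc1 bdev, nqn.2016-06.io.spdk:cnode1, the 10.0.0.2:4420 listener) is created with identical arguments, so any behavioral difference between the two suites comes from the capsule size alone.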
00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:16.163 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.163 [2024-11-07 10:38:43.679097] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:10:16.163 [2024-11-07 10:38:43.679140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.163 [2024-11-07 10:38:43.745501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.163 [2024-11-07 10:38:43.788568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.163 [2024-11-07 10:38:43.788604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.163 [2024-11-07 10:38:43.788611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.163 [2024-11-07 10:38:43.788617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.163 [2024-11-07 10:38:43.788622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.163 [2024-11-07 10:38:43.790111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.163 [2024-11-07 10:38:43.790209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.163 [2024-11-07 10:38:43.790270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.163 [2024-11-07 10:38:43.790272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.421 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:16.421 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:16.421 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:16.421 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:16.421 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.422 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.422 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:16.422 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:16.422 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.422 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.422 [2024-11-07 10:38:43.927861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.422 10:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.422 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:16.422 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.422 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.422 Malloc1 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.422 [2024-11-07 10:38:44.071256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:16.422 10:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.422 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.680 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.680 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:16.680 { 00:10:16.680 "name": "Malloc1", 00:10:16.680 "aliases": [ 00:10:16.680 "70205c6c-abd5-4c32-86ae-b1df0da791b5" 00:10:16.680 ], 00:10:16.680 "product_name": "Malloc disk", 00:10:16.680 "block_size": 512, 00:10:16.680 "num_blocks": 1048576, 00:10:16.680 "uuid": "70205c6c-abd5-4c32-86ae-b1df0da791b5", 00:10:16.680 "assigned_rate_limits": { 00:10:16.680 "rw_ios_per_sec": 0, 00:10:16.680 "rw_mbytes_per_sec": 0, 00:10:16.680 "r_mbytes_per_sec": 0, 00:10:16.680 "w_mbytes_per_sec": 0 00:10:16.680 }, 00:10:16.680 "claimed": true, 00:10:16.680 "claim_type": "exclusive_write", 00:10:16.680 "zoned": false, 00:10:16.680 "supported_io_types": { 00:10:16.680 "read": true, 00:10:16.680 "write": true, 00:10:16.680 "unmap": true, 00:10:16.680 "flush": true, 00:10:16.680 "reset": true, 00:10:16.680 "nvme_admin": false, 00:10:16.680 "nvme_io": false, 00:10:16.680 "nvme_io_md": false, 00:10:16.680 "write_zeroes": true, 00:10:16.680 "zcopy": true, 00:10:16.680 "get_zone_info": false, 00:10:16.680 "zone_management": false, 00:10:16.680 "zone_append": false, 00:10:16.680 "compare": false, 00:10:16.680 "compare_and_write": false, 00:10:16.680 "abort": true, 00:10:16.680 "seek_hole": false, 00:10:16.680 "seek_data": false, 00:10:16.680 "copy": true, 00:10:16.680 "nvme_iov_md": false 00:10:16.680 }, 00:10:16.680 "memory_domains": [ 00:10:16.680 { 00:10:16.680 "dma_device_id": "system", 00:10:16.680 "dma_device_type": 1 00:10:16.680 }, 00:10:16.680 { 00:10:16.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.680 "dma_device_type": 2 00:10:16.680 } 00:10:16.680 ], 00:10:16.680 "driver_specific": {} 00:10:16.680 } 00:10:16.680 ]' 00:10:16.680 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:16.680 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:16.680 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:16.680 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:16.680 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:16.680 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:16.680 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:16.680 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.052 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.052 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:18.052 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.052 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:18.052 10:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:19.952 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:19.952 10:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:20.209 10:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.143 ************************************ 00:10:21.143 START TEST filesystem_in_capsule_ext4 00:10:21.143 ************************************ 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:21.143 10:38:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:21.143 mke2fs 1.47.0 (5-Feb-2023) 00:10:21.401 Discarding device blocks: 0/522240 done 00:10:21.401 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:21.401 Filesystem UUID: 5dacc1f5-f67a-44e9-a2dc-5541faa59b62 00:10:21.401 Superblock backups stored on blocks: 00:10:21.401 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:21.401 00:10:21.401 Allocating group tables: 0/64 done 00:10:21.401 Writing inode tables: 
0/64 done 00:10:21.659 Creating journal (8192 blocks): done 00:10:21.659 Writing superblocks and filesystem accounting information: 0/64 done 00:10:21.659 00:10:21.659 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:21.659 10:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:26.919 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:26.919 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:26.919 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:26.919 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:26.919 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:26.919 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:26.919 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2592311 00:10:26.919 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:26.919 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:27.177 00:10:27.177 real 0m5.841s 00:10:27.177 user 0m0.034s 00:10:27.177 sys 0m0.066s 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:27.177 ************************************ 00:10:27.177 END TEST filesystem_in_capsule_ext4 00:10:27.177 ************************************ 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.177 
************************************ 00:10:27.177 START TEST filesystem_in_capsule_btrfs 00:10:27.177 ************************************ 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:27.177 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:27.435 btrfs-progs v6.8.1 00:10:27.435 See https://btrfs.readthedocs.io for more information. 00:10:27.435 00:10:27.435 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:27.435 NOTE: several default settings have changed in version 5.15, please make sure 00:10:27.435 this does not affect your deployments: 00:10:27.435 - DUP for metadata (-m dup) 00:10:27.435 - enabled no-holes (-O no-holes) 00:10:27.435 - enabled free-space-tree (-R free-space-tree) 00:10:27.435 00:10:27.435 Label: (null) 00:10:27.435 UUID: 2fbc531e-325a-4c12-ac4e-15c7110b0ff4 00:10:27.435 Node size: 16384 00:10:27.435 Sector size: 4096 (CPU page size: 4096) 00:10:27.435 Filesystem size: 510.00MiB 00:10:27.435 Block group profiles: 00:10:27.435 Data: single 8.00MiB 00:10:27.435 Metadata: DUP 32.00MiB 00:10:27.435 System: DUP 8.00MiB 00:10:27.435 SSD detected: yes 00:10:27.435 Zoned device: no 00:10:27.435 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:27.435 Checksum: crc32c 00:10:27.435 Number of devices: 1 00:10:27.435 Devices: 00:10:27.435 ID SIZE PATH 00:10:27.435 1 510.00MiB /dev/nvme0n1p1 00:10:27.435 00:10:27.435 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:27.435 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2592311 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:28.001 00:10:28.001 real 0m0.837s 00:10:28.001 user 0m0.020s 00:10:28.001 sys 0m0.124s 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:28.001 ************************************ 00:10:28.001 END TEST filesystem_in_capsule_btrfs 00:10:28.001 ************************************ 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.001 ************************************ 00:10:28.001 START TEST filesystem_in_capsule_xfs 00:10:28.001 ************************************ 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:10:28.001 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:28.002 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:28.002 10:38:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:28.260 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:28.260 = sectsz=512 attr=2, projid32bit=1 00:10:28.260 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:28.260 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:28.260 data = bsize=4096 blocks=130560, imaxpct=25 00:10:28.260 = sunit=0 swidth=0 blks 00:10:28.260 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:28.260 log =internal log bsize=4096 blocks=16384, version=2 00:10:28.260 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:28.260 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:29.193 Discarding blocks...Done. 
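All three mkfs invocations above come out of the same make_filesystem helper; as the xtrace lines show, the only per-filesystem difference is the force flag (-F for ext4, -f for btrfs and xfs). A hedged reconstruction of that branch (the helper also declares an i counter in the trace, omitted here; the function name below is illustrative):

    # Pick the force flag mkfs expects, then format the device.
    make_filesystem_sketch() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F        # mke2fs wants a capital -F to overwrite an existing filesystem
        else
            force=-f        # mkfs.btrfs and mkfs.xfs use lowercase -f
        fi
        "mkfs.$fstype" $force "$dev_name"
    }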
00:10:29.193 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:29.193 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2592311 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:31.721 00:10:31.721 real 0m3.670s 00:10:31.721 user 0m0.027s 00:10:31.721 sys 0m0.073s 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:31.721 ************************************ 00:10:31.721 END TEST filesystem_in_capsule_xfs 00:10:31.721 ************************************ 00:10:31.721 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:31.980 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:31.980 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.980 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:31.980 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:10:31.980 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:31.980 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.980 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:31.980 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.980 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:31.980 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.980 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.980 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.237 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.238 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:32.238 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2592311 00:10:32.238 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2592311 ']' 00:10:32.238 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2592311 00:10:32.238 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:32.238 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:32.238 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2592311 00:10:32.238 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:32.238 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:32.238 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2592311' 00:10:32.238 killing process with pid 2592311 00:10:32.238 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 2592311 00:10:32.238 10:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 2592311 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:32.496 00:10:32.496 real 0m16.395s 00:10:32.496 user 1m4.521s 00:10:32.496 sys 0m1.388s 00:10:32.496 10:39:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.496 ************************************ 00:10:32.496 END TEST nvmf_filesystem_in_capsule 00:10:32.496 ************************************ 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:32.496 rmmod nvme_tcp 00:10:32.496 rmmod nvme_fabrics 00:10:32.496 rmmod nvme_keyring 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.496 10:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:35.030 00:10:35.030 real 0m42.178s 00:10:35.030 user 2m13.828s 00:10:35.030 sys 0m7.465s 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:35.030 
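The iptables-save | grep -v SPDK_NVMF | iptables-restore step in this teardown works because every rule the tests install is tagged with an SPDK_NVMF comment at insertion time (the ipts call further down in this log, during the 10.0.0.x setup, shows the matching insert). A sketch of that tag-and-sweep pair, reproducing only the commands visible in the trace, with hypothetical function names:

    # Install a rule and tag it so teardown can find it later.
    ipts_sketch() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    # Remove every tagged rule in one pass, leaving unrelated rules untouched.
    iptr_sketch() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }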
************************************ 00:10:35.030 END TEST nvmf_filesystem 00:10:35.030 ************************************ 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:35.030 ************************************ 00:10:35.030 START TEST nvmf_target_discovery 00:10:35.030 ************************************ 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:35.030 * Looking for test storage... 00:10:35.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:35.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.030 --rc genhtml_branch_coverage=1 00:10:35.030 --rc genhtml_function_coverage=1 00:10:35.030 --rc genhtml_legend=1 00:10:35.030 --rc geninfo_all_blocks=1 00:10:35.030 --rc geninfo_unexecuted_blocks=1 00:10:35.030 00:10:35.030 ' 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:35.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.030 --rc genhtml_branch_coverage=1 00:10:35.030 --rc genhtml_function_coverage=1 00:10:35.030 --rc genhtml_legend=1 00:10:35.030 --rc geninfo_all_blocks=1 00:10:35.030 --rc geninfo_unexecuted_blocks=1 00:10:35.030 00:10:35.030 ' 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:35.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.030 --rc genhtml_branch_coverage=1 00:10:35.030 --rc genhtml_function_coverage=1 00:10:35.030 --rc genhtml_legend=1 00:10:35.030 --rc geninfo_all_blocks=1 00:10:35.030 --rc geninfo_unexecuted_blocks=1 00:10:35.030 00:10:35.030 ' 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:35.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.030 --rc genhtml_branch_coverage=1 00:10:35.030 --rc genhtml_function_coverage=1 00:10:35.030 --rc genhtml_legend=1 00:10:35.030 --rc geninfo_all_blocks=1 00:10:35.030 --rc geninfo_unexecuted_blocks=1 00:10:35.030 00:10:35.030 ' 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.030 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:35.031 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:40.297 10:39:07 
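The "[: : integer expression expected" line above is bash complaining that '[' '' -eq 1 ']' compares an empty string numerically; which variable arrived empty is not shown in this part of the log, so the guard below only illustrates the pattern that avoids the noise (SOME_FLAG is a hypothetical stand-in):

    # '[' '' -eq 1 ']' -> "integer expression expected"; defaulting the expansion avoids it.
    SOME_FLAG=""                          # hypothetical, mirrors the empty value seen in the log
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi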
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:40.297 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:40.297 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:40.297 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:40.298 Found net devices under 0000:86:00.0: cvl_0_0 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
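The e810/x722/mlx arrays being filled above classify the host's NICs by PCI vendor:device ID, keep only the family this job was asked for (SPDK_TEST_NVMF_NICS=e810), and then map each kept PCI address to its network interface through sysfs. A condensed sketch of that selection; pci_bus_cache is assumed to map "vendor:device" keys to PCI addresses (its population is not shown here), and the trace's "[[ up == up ]]" check is assumed to come from reading the link state, sketched via operstate:

    declare -A pci_bus_cache   # assumed shape: ["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1"
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=() net_devs=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})       # E810-C
    e810+=(${pci_bus_cache["$intel:0x159b"]})       # E810-XXV: the 0000:86:00.x ports reported above
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x101d"]})     # one of several ConnectX IDs checked in the trace
    pci_devs=("${e810[@]}")                         # only the e810 ports are kept for this run

    # Map each kept PCI address to its net interface and keep it if the link is up.
    for pci in "${pci_devs[@]}"; do
        for net_dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ "$(cat "$net_dev/operstate" 2>/dev/null)" = up ] || continue
            net_devs+=("${net_dev##*/}")            # e.g. cvl_0_0 and cvl_0_1
            echo "Found net devices under $pci: ${net_dev##*/}"
        done
    done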
00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:40.298 Found net devices under 0000:86:00.1: cvl_0_1 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.298 10:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:40.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:10:40.298 00:10:40.298 --- 10.0.0.2 ping statistics --- 00:10:40.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.298 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:10:40.298 00:10:40.298 --- 10.0.0.1 ping statistics --- 00:10:40.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.298 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2598818 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2598818 00:10:40.298 10:39:07 
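Pulling together the ip and iptables commands above: the test isolates the target-side port in its own network namespace so that the target (10.0.0.2) and the initiator (10.0.0.1) reach each other over the physical link, then confirms reachability in both directions. The namespace and interface names below are the ones the trace reports:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                      # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator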
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 2598818 ']' 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:40.298 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.298 [2024-11-07 10:39:07.795619] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:10:40.298 [2024-11-07 10:39:07.795672] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.298 [2024-11-07 10:39:07.863572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.298 [2024-11-07 10:39:07.908418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.298 [2024-11-07 10:39:07.908458] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.298 [2024-11-07 10:39:07.908465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.298 [2024-11-07 10:39:07.908472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.298 [2024-11-07 10:39:07.908478] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
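nvmfappstart launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the application is ready; the polling itself is not shown in this part of the log, so the loop below is only a guess at the shape of that wait (checking the PID and the default /var/tmp/spdk.sock RPC socket):

    # Hedged sketch of a waitforlisten-style loop; the real helper's checks may differ.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [ -S "$sock" ] && return 0               # RPC unix socket has appeared
            sleep 0.1
        done
        return 1                                     # timed out waiting for the target
    }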
00:10:40.298 [2024-11-07 10:39:07.910042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.298 [2024-11-07 10:39:07.910138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.298 [2024-11-07 10:39:07.910236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.298 [2024-11-07 10:39:07.910238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.557 [2024-11-07 10:39:08.047647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.557 Null1 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.557 10:39:08 
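The rpc_cmd calls here, together with the listener registrations that follow, build the discovery fixture: a TCP transport plus four null-backed subsystems, each exposed on 10.0.0.2:4420. Written out as one loop, assuming rpc_cmd forwards to scripts/rpc.py on the default socket as the trace suggests ($SPDK_DIR is a placeholder for the checked-out spdk tree):

    # Assumed thin wrapper around the SPDK RPC client.
    rpc_cmd() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        rpc_cmd bdev_null_create "Null$i" 102400 512   # size and block size taken verbatim from the trace
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done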
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.557 [2024-11-07 10:39:08.093117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.557 Null2 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:40.557 Null3 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.557 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.558 Null4 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.558 10:39:08 
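The xtrace above repeats one pattern per subsystem: create a null bdev, create the subsystem, attach the bdev as a namespace, and (for cnode4, just below) expose it on 10.0.0.2:4420. Written directly against scripts/rpc.py rather than the rpc_cmd wrapper, the whole provisioning step is roughly the loop below; the RPC variable is illustrative, everything else mirrors the traced commands.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$RPC" nvmf_create_transport -t tcp -o -u 8192       # TCP transport, as traced at discovery.sh@23

    for i in 1 2 3 4; do
        "$RPC" bdev_null_create "Null$i" 102400 512      # null bdev, size/block-size as traced
        "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
               -a -s "SPDK0000000000000$i"               # allow any host, fixed serial number
        "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
               -t tcp -a 10.0.0.2 -s 4420
    done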
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.558 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:40.816 00:10:40.816 Discovery Log Number of Records 6, Generation counter 6 00:10:40.816 =====Discovery Log Entry 0====== 00:10:40.816 trtype: tcp 00:10:40.816 adrfam: ipv4 00:10:40.816 subtype: current discovery subsystem 00:10:40.816 treq: not required 00:10:40.816 portid: 0 00:10:40.816 trsvcid: 4420 00:10:40.816 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:40.816 traddr: 10.0.0.2 00:10:40.816 eflags: explicit discovery connections, duplicate discovery information 00:10:40.816 sectype: none 00:10:40.816 =====Discovery Log Entry 1====== 00:10:40.816 trtype: tcp 00:10:40.816 adrfam: ipv4 00:10:40.816 subtype: nvme subsystem 00:10:40.816 treq: not required 00:10:40.816 portid: 0 00:10:40.816 trsvcid: 4420 00:10:40.816 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:40.816 traddr: 10.0.0.2 00:10:40.816 eflags: none 00:10:40.816 sectype: none 00:10:40.816 =====Discovery Log Entry 2====== 00:10:40.816 trtype: tcp 00:10:40.816 adrfam: ipv4 00:10:40.816 subtype: nvme subsystem 00:10:40.816 treq: not required 00:10:40.816 portid: 0 00:10:40.816 trsvcid: 4420 00:10:40.816 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:40.816 traddr: 10.0.0.2 00:10:40.816 eflags: none 00:10:40.816 sectype: none 00:10:40.816 =====Discovery Log Entry 3====== 00:10:40.816 trtype: tcp 00:10:40.816 adrfam: ipv4 00:10:40.816 subtype: nvme subsystem 00:10:40.816 treq: not required 00:10:40.816 portid: 0 00:10:40.816 trsvcid: 4420 00:10:40.816 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:40.816 traddr: 10.0.0.2 00:10:40.816 eflags: none 00:10:40.816 sectype: none 00:10:40.816 =====Discovery Log Entry 4====== 00:10:40.816 trtype: tcp 00:10:40.816 adrfam: ipv4 00:10:40.816 subtype: nvme subsystem 
00:10:40.816 treq: not required 00:10:40.816 portid: 0 00:10:40.816 trsvcid: 4420 00:10:40.816 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:40.816 traddr: 10.0.0.2 00:10:40.816 eflags: none 00:10:40.816 sectype: none 00:10:40.816 =====Discovery Log Entry 5====== 00:10:40.816 trtype: tcp 00:10:40.816 adrfam: ipv4 00:10:40.816 subtype: discovery subsystem referral 00:10:40.816 treq: not required 00:10:40.816 portid: 0 00:10:40.816 trsvcid: 4430 00:10:40.816 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:40.816 traddr: 10.0.0.2 00:10:40.816 eflags: none 00:10:40.816 sectype: none 00:10:40.816 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:40.816 Perform nvmf subsystem discovery via RPC 00:10:40.816 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:40.816 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.816 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.816 [ 00:10:40.816 { 00:10:40.816 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:40.816 "subtype": "Discovery", 00:10:40.816 "listen_addresses": [ 00:10:40.816 { 00:10:40.816 "trtype": "TCP", 00:10:40.816 "adrfam": "IPv4", 00:10:40.816 "traddr": "10.0.0.2", 00:10:40.816 "trsvcid": "4420" 00:10:40.816 } 00:10:40.816 ], 00:10:40.816 "allow_any_host": true, 00:10:40.816 "hosts": [] 00:10:40.816 }, 00:10:40.816 { 00:10:40.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:40.816 "subtype": "NVMe", 00:10:40.816 "listen_addresses": [ 00:10:40.816 { 00:10:40.816 "trtype": "TCP", 00:10:40.816 "adrfam": "IPv4", 00:10:40.816 "traddr": "10.0.0.2", 00:10:40.816 "trsvcid": "4420" 00:10:40.816 } 00:10:40.816 ], 00:10:40.816 "allow_any_host": true, 00:10:40.816 "hosts": [], 00:10:40.816 "serial_number": "SPDK00000000000001", 00:10:40.816 "model_number": "SPDK bdev Controller", 00:10:40.816 "max_namespaces": 32, 00:10:40.816 "min_cntlid": 1, 00:10:40.816 "max_cntlid": 65519, 00:10:40.816 "namespaces": [ 00:10:40.816 { 00:10:40.816 "nsid": 1, 00:10:40.816 "bdev_name": "Null1", 00:10:40.816 "name": "Null1", 00:10:40.816 "nguid": "1499A9CB4FCC4E3C88925A218B06FE51", 00:10:40.816 "uuid": "1499a9cb-4fcc-4e3c-8892-5a218b06fe51" 00:10:40.816 } 00:10:40.816 ] 00:10:40.816 }, 00:10:40.816 { 00:10:40.816 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:40.816 "subtype": "NVMe", 00:10:40.816 "listen_addresses": [ 00:10:40.816 { 00:10:40.816 "trtype": "TCP", 00:10:40.816 "adrfam": "IPv4", 00:10:40.816 "traddr": "10.0.0.2", 00:10:40.816 "trsvcid": "4420" 00:10:40.816 } 00:10:40.816 ], 00:10:40.816 "allow_any_host": true, 00:10:40.816 "hosts": [], 00:10:40.816 "serial_number": "SPDK00000000000002", 00:10:40.816 "model_number": "SPDK bdev Controller", 00:10:40.816 "max_namespaces": 32, 00:10:40.816 "min_cntlid": 1, 00:10:40.816 "max_cntlid": 65519, 00:10:40.816 "namespaces": [ 00:10:40.816 { 00:10:40.816 "nsid": 1, 00:10:40.816 "bdev_name": "Null2", 00:10:40.816 "name": "Null2", 00:10:40.816 "nguid": "98DBFB4A547045CE848F17ECF7F44025", 00:10:40.816 "uuid": "98dbfb4a-5470-45ce-848f-17ecf7f44025" 00:10:40.816 } 00:10:40.816 ] 00:10:40.816 }, 00:10:40.816 { 00:10:40.816 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:40.816 "subtype": "NVMe", 00:10:40.816 "listen_addresses": [ 00:10:40.816 { 00:10:40.816 "trtype": "TCP", 00:10:40.816 "adrfam": "IPv4", 00:10:40.816 "traddr": "10.0.0.2", 
00:10:40.817 "trsvcid": "4420" 00:10:40.817 } 00:10:40.817 ], 00:10:40.817 "allow_any_host": true, 00:10:40.817 "hosts": [], 00:10:40.817 "serial_number": "SPDK00000000000003", 00:10:40.817 "model_number": "SPDK bdev Controller", 00:10:40.817 "max_namespaces": 32, 00:10:40.817 "min_cntlid": 1, 00:10:40.817 "max_cntlid": 65519, 00:10:40.817 "namespaces": [ 00:10:40.817 { 00:10:40.817 "nsid": 1, 00:10:40.817 "bdev_name": "Null3", 00:10:40.817 "name": "Null3", 00:10:40.817 "nguid": "2B872DF4B17048ECA15555343BDBC4FC", 00:10:40.817 "uuid": "2b872df4-b170-48ec-a155-55343bdbc4fc" 00:10:40.817 } 00:10:40.817 ] 00:10:40.817 }, 00:10:40.817 { 00:10:40.817 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:40.817 "subtype": "NVMe", 00:10:40.817 "listen_addresses": [ 00:10:40.817 { 00:10:40.817 "trtype": "TCP", 00:10:40.817 "adrfam": "IPv4", 00:10:40.817 "traddr": "10.0.0.2", 00:10:40.817 "trsvcid": "4420" 00:10:40.817 } 00:10:40.817 ], 00:10:40.817 "allow_any_host": true, 00:10:40.817 "hosts": [], 00:10:40.817 "serial_number": "SPDK00000000000004", 00:10:40.817 "model_number": "SPDK bdev Controller", 00:10:40.817 "max_namespaces": 32, 00:10:40.817 "min_cntlid": 1, 00:10:40.817 "max_cntlid": 65519, 00:10:40.817 "namespaces": [ 00:10:40.817 { 00:10:40.817 "nsid": 1, 00:10:40.817 "bdev_name": "Null4", 00:10:40.817 "name": "Null4", 00:10:40.817 "nguid": "9515CB376ED74E66837ADB331E11FB9F", 00:10:40.817 "uuid": "9515cb37-6ed7-4e66-837a-db331e11fb9f" 00:10:40.817 } 00:10:40.817 ] 00:10:40.817 } 00:10:40.817 ] 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.817 10:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.817 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.075 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.075 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.075 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:41.075 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.075 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.075 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.075 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:41.075 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.075 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.075 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.075 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:41.076 10:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.076 rmmod nvme_tcp 00:10:41.076 rmmod nvme_fabrics 00:10:41.076 rmmod nvme_keyring 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2598818 ']' 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2598818 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 2598818 ']' 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 2598818 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2598818 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2598818' 00:10:41.076 killing process with pid 2598818 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 2598818 00:10:41.076 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 2598818 00:10:41.334 10:39:08 
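Before the cleanup above, the test verified the target from both sides: the nvme discover output listed the four NVMe subsystems plus the 4430 referral, and nvmf_get_subsystems returned the matching JSON. A by-hand version of that check followed by the traced teardown could look like the sketch below; the host NQN/ID and the bdev_get_bdevs jq filter come from the trace, while the '.[].nqn' filter is just one convenient way to compare the NQNs.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562

    # Verify: discovery log from the initiator side, subsystem list from the RPC side.
    nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID"
    "$RPC" nvmf_get_subsystems | jq -r '.[].nqn'

    # Teardown: delete each subsystem and its backing bdev, drop the referral,
    # then confirm no bdevs are left behind.
    for i in 1 2 3 4; do
        "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        "$RPC" bdev_null_delete "Null$i"
    done
    "$RPC" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    "$RPC" bdev_get_bdevs | jq -r '.[].name'    # expected to print nothing at this point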
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.334 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.334 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.334 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:41.334 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.334 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.334 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:41.334 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.334 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.334 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.334 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.334 10:39:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.864 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:43.864 00:10:43.864 real 0m8.653s 00:10:43.864 user 0m5.387s 00:10:43.864 sys 0m4.361s 00:10:43.864 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:43.864 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.864 ************************************ 00:10:43.864 END TEST nvmf_target_discovery 00:10:43.864 ************************************ 00:10:43.864 10:39:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:43.864 10:39:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:43.864 10:39:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:43.864 10:39:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:43.864 ************************************ 00:10:43.864 START TEST nvmf_referrals 00:10:43.864 ************************************ 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:43.864 * Looking for test storage... 
00:10:43.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:43.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.864 --rc genhtml_branch_coverage=1 00:10:43.864 --rc genhtml_function_coverage=1 00:10:43.864 --rc genhtml_legend=1 00:10:43.864 --rc geninfo_all_blocks=1 00:10:43.864 --rc geninfo_unexecuted_blocks=1 00:10:43.864 00:10:43.864 ' 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:43.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.864 --rc genhtml_branch_coverage=1 00:10:43.864 --rc genhtml_function_coverage=1 00:10:43.864 --rc genhtml_legend=1 00:10:43.864 --rc geninfo_all_blocks=1 00:10:43.864 --rc geninfo_unexecuted_blocks=1 00:10:43.864 00:10:43.864 ' 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:43.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.864 --rc genhtml_branch_coverage=1 00:10:43.864 --rc genhtml_function_coverage=1 00:10:43.864 --rc genhtml_legend=1 00:10:43.864 --rc geninfo_all_blocks=1 00:10:43.864 --rc geninfo_unexecuted_blocks=1 00:10:43.864 00:10:43.864 ' 00:10:43.864 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.865 --rc genhtml_branch_coverage=1 00:10:43.865 --rc genhtml_function_coverage=1 00:10:43.865 --rc genhtml_legend=1 00:10:43.865 --rc geninfo_all_blocks=1 00:10:43.865 --rc geninfo_unexecuted_blocks=1 00:10:43.865 00:10:43.865 ' 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:43.865 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:49.134 10:39:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:49.134 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:49.134 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:49.134 
10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:49.134 Found net devices under 0000:86:00.0: cvl_0_0 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.134 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:49.135 Found net devices under 0000:86:00.1: cvl_0_1 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:49.135 10:39:16 
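With both e810 ports identified (cvl_0_0 and cvl_0_1), nvmf_tcp_init, traced below, moves the target port into its own network namespace and assigns the 10.0.0.0/24 addresses used for the rest of the run. Stripped of the harness wrappers, the setup reduces to the iproute2/iptables/ping sequence sketched here; the command set is taken from the trace, with the ordering lightly condensed.

    # Target NIC goes into a dedicated namespace; initiator NIC stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP traffic to port 4420, then check reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1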
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:49.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:49.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:10:49.135 00:10:49.135 --- 10.0.0.2 ping statistics --- 00:10:49.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.135 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:49.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:10:49.135 00:10:49.135 --- 10.0.0.1 ping statistics --- 00:10:49.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.135 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:49.135 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:49.393 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:49.393 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:49.393 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:49.393 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.393 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2602788 00:10:49.393 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2602788 00:10:49.393 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.393 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 2602788 ']' 00:10:49.393 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.393 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:49.393 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:49.393 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:49.393 10:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.393 [2024-11-07 10:39:16.872728] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:10:49.393 [2024-11-07 10:39:16.872781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.393 [2024-11-07 10:39:16.939059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.393 [2024-11-07 10:39:16.980844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.393 [2024-11-07 10:39:16.980885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.393 [2024-11-07 10:39:16.980895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.393 [2024-11-07 10:39:16.980901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.393 [2024-11-07 10:39:16.980905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.394 [2024-11-07 10:39:16.982499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.394 [2024-11-07 10:39:16.982599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.394 [2024-11-07 10:39:16.982726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.394 [2024-11-07 10:39:16.982728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.652 [2024-11-07 10:39:17.127681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
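The rpc_cmd calls traced above create the TCP transport and put the discovery subsystem on 10.0.0.2:8009, the standard NVMe-oF discovery port. rpc_cmd is the test suite's wrapper around scripts/rpc.py, so the two calls resolve, roughly, to (paths shortened):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                     # -u 8192: 8 KiB I/O unit size
./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery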
00:10:49.652 [2024-11-07 10:39:17.141182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:49.652 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:49.910 10:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:49.910 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:50.168 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:50.426 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.426 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:50.426 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:50.426 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:50.426 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:50.426 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:50.426 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:50.426 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:50.426 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.426 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.684 10:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.684 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:50.942 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:50.942 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:50.942 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:50.942 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:50.942 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:50.942 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:50.942 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.200 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.457 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:51.457 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:51.457 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:51.457 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:51.457 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:51.457 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:51.457 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:51.457 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:51.457 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:51.457 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:51.457 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:51.457 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:51.457 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:51.457 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
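What just finished is the core of the referrals test: add three referrals over RPC, confirm the same three addresses come back both from the target's own bookkeeping and from a real discovery log page on the wire, remove them, confirm both views go empty, then repeat the exercise with referrals pointing at a discovery subsystem and at nqn.2016-06.io.spdk:cnode1. Stripped of the xtrace noise, the two views being compared look like this ($NVME_HOSTNQN/$NVME_HOSTID are the generated host NQN and ID seen in the discover commands above):

# RPC view: what the target believes it is advertising
./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# host view: what a discovery log page actually returns; referral entries are the
# records whose subtype is not the current discovery subsystem
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The checks pass when both pipelines print the same sorted address list, and print nothing once nvmf_discovery_remove_referral has deleted every entry.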
00:10:51.457 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:51.457 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.457 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.715 rmmod nvme_tcp 00:10:51.715 rmmod nvme_fabrics 00:10:51.715 rmmod nvme_keyring 00:10:51.715 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.715 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:51.715 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:51.715 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2602788 ']' 00:10:51.715 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2602788 00:10:51.715 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 2602788 ']' 00:10:51.715 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 2602788 00:10:51.715 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:10:51.715 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:51.715 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2602788 00:10:51.716 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:51.716 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:51.716 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2602788' 00:10:51.716 killing process with pid 2602788 00:10:51.716 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 2602788 00:10:51.716 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 2602788 00:10:51.974 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.974 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.974 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.974 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:51.974 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:51.974 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.974 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.974 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.974 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:51.974 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.974 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.974 10:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.875 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:53.875 00:10:53.875 real 0m10.461s 00:10:53.875 user 0m12.041s 00:10:53.875 sys 0m4.891s 00:10:53.875 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:53.875 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.875 ************************************ 00:10:53.875 END TEST nvmf_referrals 00:10:53.875 ************************************ 00:10:53.875 10:39:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:53.875 10:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:53.875 10:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:53.875 10:39:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:53.875 ************************************ 00:10:53.875 START TEST nvmf_connect_disconnect 00:10:53.875 ************************************ 00:10:53.875 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:54.134 * Looking for test storage... 00:10:54.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:54.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:10:54.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:54.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.135 10:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:54.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.135 --rc genhtml_branch_coverage=1 00:10:54.135 --rc genhtml_function_coverage=1 00:10:54.135 --rc genhtml_legend=1 00:10:54.135 --rc geninfo_all_blocks=1 00:10:54.135 --rc geninfo_unexecuted_blocks=1 00:10:54.135 00:10:54.135 ' 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:54.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.135 --rc genhtml_branch_coverage=1 00:10:54.135 --rc genhtml_function_coverage=1 00:10:54.135 --rc genhtml_legend=1 00:10:54.135 --rc geninfo_all_blocks=1 00:10:54.135 --rc geninfo_unexecuted_blocks=1 00:10:54.135 00:10:54.135 ' 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:54.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.135 --rc genhtml_branch_coverage=1 00:10:54.135 --rc genhtml_function_coverage=1 00:10:54.135 --rc genhtml_legend=1 00:10:54.135 --rc geninfo_all_blocks=1 00:10:54.135 --rc geninfo_unexecuted_blocks=1 00:10:54.135 00:10:54.135 ' 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:54.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.135 --rc genhtml_branch_coverage=1 00:10:54.135 --rc genhtml_function_coverage=1 00:10:54.135 --rc genhtml_legend=1 00:10:54.135 --rc geninfo_all_blocks=1 00:10:54.135 --rc geninfo_unexecuted_blocks=1 00:10:54.135 00:10:54.135 ' 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.135 10:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.135 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.136 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:54.136 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:54.136 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.136 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:54.136 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:54.136 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:54.136 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.136 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.136 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.136 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:54.136 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:54.136 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.136 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.402 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.402 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:59.402 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:59.402 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:59.402 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:59.402 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:59.402 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:59.402 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:59.403 
10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:59.403 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.403 
10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:59.403 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:59.403 Found net devices under 0000:86:00.0: cvl_0_0 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
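Before any TCP setup, nvmftestinit has to find usable NICs; the trace above is gather_supported_nvmf_pci_devs matching PCI IDs (here two Intel E810 functions, 0x8086:0x159b) and then resolving each PCI function to its kernel net device through sysfs. A simplified equivalent of that sysfs walk, using the addresses found on this host:

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e "$netdir" ]] || continue
        net_devs+=("${netdir##*/}")      # -> cvl_0_0 and cvl_0_1 on this machine
    done
done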
00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:59.403 Found net devices under 0000:86:00.1: cvl_0_1 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:59.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:59.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:10:59.403 00:10:59.403 --- 10.0.0.2 ping statistics --- 00:10:59.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.403 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:10:59.403 00:10:59.403 --- 10.0.0.1 ping statistics --- 00:10:59.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.403 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.403 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2606750 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2606750 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 2606750 ']' 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:59.404 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.404 [2024-11-07 10:39:26.979498] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:10:59.404 [2024-11-07 10:39:26.979547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.404 [2024-11-07 10:39:27.049071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.662 [2024-11-07 10:39:27.093070] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.662 [2024-11-07 10:39:27.093105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.662 [2024-11-07 10:39:27.093112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.662 [2024-11-07 10:39:27.093118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.662 [2024-11-07 10:39:27.093123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
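For orientation, the bring-up that nvmf_tcp_init and nvmfappstart just traced boils down to a handful of iproute2/iptables commands plus launching the target inside the new namespace. This is a minimal sketch only, assuming an SPDK checkout as the working directory and the same cvl_0_* names and addresses shown in this log; the readiness loop at the end is illustrative and stands in for the test's waitforlisten helper:

    # cvl_0_0 becomes the target-side port inside its own namespace;
    # cvl_0_1 stays in the root namespace as the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open NVMe/TCP port 4420 from the initiator interface; the comment tag is what
    # lets the teardown strip exactly these rules again later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Launch the target in the namespace and wait for its RPC socket to answer.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

The two pings traced above (10.0.0.2 from the root namespace, 10.0.0.1 from inside it) are the sanity check that both directions work before any NVMe traffic is attempted.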
00:10:59.662 [2024-11-07 10:39:27.094590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.663 [2024-11-07 10:39:27.094611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.663 [2024-11-07 10:39:27.094685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.663 [2024-11-07 10:39:27.094686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.663 [2024-11-07 10:39:27.244167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.663 10:39:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.663 [2024-11-07 10:39:27.308095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:59.663 10:39:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:02.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.160 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:16.160 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:16.160 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.161 rmmod nvme_tcp 00:11:16.161 rmmod nvme_fabrics 00:11:16.161 rmmod nvme_keyring 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2606750 ']' 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2606750 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2606750 ']' 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 2606750 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
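Pulled out of the rpc_cmd and nvme wrappers, the provisioning and one pass of the connect/disconnect loop traced above amount to roughly the following. A sketch under assumptions: rpc.py talks to the default /var/tmp/spdk.sock, the nvme-cli flags are the generic ones, and the test additionally passes the --hostnqn/--hostid pair from common.sh's NVME_HOST array:

    # One-time provisioning over the target's RPC socket (all five calls appear in the trace).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 64 512                        # returns Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # One loop iteration; connect_disconnect.sh repeats this num_iterations=5 times.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # prints: NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)

The five 'disconnected 1 controller(s)' lines above are the only per-iteration output once xtrace is switched off with set +x.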
00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2606750 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2606750' 00:11:16.161 killing process with pid 2606750 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 2606750 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 2606750 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.161 10:39:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.695 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:18.695 00:11:18.695 real 0m24.356s 00:11:18.695 user 1m7.717s 00:11:18.695 sys 0m5.347s 00:11:18.695 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:18.695 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:18.695 ************************************ 00:11:18.695 END TEST nvmf_connect_disconnect 00:11:18.695 ************************************ 00:11:18.695 10:39:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:18.695 10:39:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:18.695 10:39:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:18.695 10:39:45 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:18.695 ************************************ 00:11:18.695 START TEST nvmf_multitarget 00:11:18.695 ************************************ 00:11:18.695 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:18.695 * Looking for test storage... 00:11:18.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:18.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.695 --rc genhtml_branch_coverage=1 00:11:18.695 --rc genhtml_function_coverage=1 00:11:18.695 --rc genhtml_legend=1 00:11:18.695 --rc geninfo_all_blocks=1 00:11:18.695 --rc geninfo_unexecuted_blocks=1 00:11:18.695 00:11:18.695 ' 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:18.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.695 --rc genhtml_branch_coverage=1 00:11:18.695 --rc genhtml_function_coverage=1 00:11:18.695 --rc genhtml_legend=1 00:11:18.695 --rc geninfo_all_blocks=1 00:11:18.695 --rc geninfo_unexecuted_blocks=1 00:11:18.695 00:11:18.695 ' 00:11:18.695 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:18.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.695 --rc genhtml_branch_coverage=1 00:11:18.695 --rc genhtml_function_coverage=1 00:11:18.695 --rc genhtml_legend=1 00:11:18.696 --rc geninfo_all_blocks=1 00:11:18.696 --rc geninfo_unexecuted_blocks=1 00:11:18.696 00:11:18.696 ' 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:18.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.696 --rc genhtml_branch_coverage=1 00:11:18.696 --rc genhtml_function_coverage=1 00:11:18.696 --rc genhtml_legend=1 00:11:18.696 --rc geninfo_all_blocks=1 00:11:18.696 --rc geninfo_unexecuted_blocks=1 00:11:18.696 00:11:18.696 ' 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.696 10:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:18.696 10:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.696 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
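This scan keys on PCI vendor/device IDs (intel=0x8086 with 0x159b for the E810 ports used here) and then resolves each matching function to its kernel netdev names through sysfs. A quick way to reproduce the lookup by hand, using the bus address this log reports:

    # List the netdevs bound to one E810 physical function (0x8086:0x159b).
    ls /sys/bus/pci/devices/0000:86:00.0/net/
    # -> cvl_0_0        (0000:86:00.1 resolves to cvl_0_1 the same way)

That is all the 'Found net devices under 0000:86:00.x' lines report; the names are then collected into net_devs and split into the target and initiator interfaces.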
00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:25.261 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:25.261 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:25.261 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:25.262 Found net devices under 0000:86:00.0: cvl_0_0 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:25.262 Found net devices under 0000:86:00.1: cvl_0_1 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:25.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:11:25.262 00:11:25.262 --- 10.0.0.2 ping statistics --- 00:11:25.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.262 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:25.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:11:25.262 00:11:25.262 --- 10.0.0.1 ping statistics --- 00:11:25.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.262 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:25.262 10:39:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2613069 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2613069 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 2613069 ']' 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:25.262 [2024-11-07 10:39:52.086025] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:11:25.262 [2024-11-07 10:39:52.086079] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.262 [2024-11-07 10:39:52.155802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.262 [2024-11-07 10:39:52.198290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.262 [2024-11-07 10:39:52.198334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.262 [2024-11-07 10:39:52.198341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.262 [2024-11-07 10:39:52.198347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.262 [2024-11-07 10:39:52.198352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.262 [2024-11-07 10:39:52.199890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.262 [2024-11-07 10:39:52.199986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.262 [2024-11-07 10:39:52.200076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.262 [2024-11-07 10:39:52.200078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.262 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:25.263 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:25.263 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:25.263 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:25.263 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:25.263 "nvmf_tgt_1" 00:11:25.263 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:25.263 "nvmf_tgt_2" 00:11:25.263 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
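The multitarget test here is a create/verify/delete cycle driven through multitarget_rpc.py, with jq length counting the entries returned by nvmf_get_targets. Condensed from the trace, assuming the SPDK root as the working directory; the counts in the comments are what this run observed:

    RPC=test/nvmf/target/multitarget_rpc.py
    $RPC nvmf_get_targets | jq length              # 1 : only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    $RPC nvmf_get_targets | jq length              # 3
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    $RPC nvmf_get_targets | jq length              # 1 again

Each delete prints 'true' in the trace that follows, and the test fails if any of the three counts differs from its expected value.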
00:11:25.263 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:25.263 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:25.263 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:25.263 true 00:11:25.263 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:25.520 true 00:11:25.520 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:25.520 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.520 rmmod nvme_tcp 00:11:25.520 rmmod nvme_fabrics 00:11:25.520 rmmod nvme_keyring 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2613069 ']' 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2613069 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 2613069 ']' 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 2613069 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:25.520 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2613069 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:25.778 10:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2613069' 00:11:25.778 killing process with pid 2613069 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 2613069 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 2613069 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.778 10:39:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.311 00:11:28.311 real 0m9.486s 00:11:28.311 user 0m7.104s 00:11:28.311 sys 0m4.847s 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:28.311 ************************************ 00:11:28.311 END TEST nvmf_multitarget 00:11:28.311 ************************************ 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.311 ************************************ 00:11:28.311 START TEST nvmf_rpc 00:11:28.311 ************************************ 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:28.311 * Looking for test storage... 
00:11:28.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.311 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:28.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.312 --rc genhtml_branch_coverage=1 00:11:28.312 --rc genhtml_function_coverage=1 00:11:28.312 --rc genhtml_legend=1 00:11:28.312 --rc geninfo_all_blocks=1 00:11:28.312 --rc geninfo_unexecuted_blocks=1 00:11:28.312 00:11:28.312 ' 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:28.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.312 --rc genhtml_branch_coverage=1 00:11:28.312 --rc genhtml_function_coverage=1 00:11:28.312 --rc genhtml_legend=1 00:11:28.312 --rc geninfo_all_blocks=1 00:11:28.312 --rc geninfo_unexecuted_blocks=1 00:11:28.312 00:11:28.312 ' 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:28.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.312 --rc genhtml_branch_coverage=1 00:11:28.312 --rc genhtml_function_coverage=1 00:11:28.312 --rc genhtml_legend=1 00:11:28.312 --rc geninfo_all_blocks=1 00:11:28.312 --rc geninfo_unexecuted_blocks=1 00:11:28.312 00:11:28.312 ' 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:28.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.312 --rc genhtml_branch_coverage=1 00:11:28.312 --rc genhtml_function_coverage=1 00:11:28.312 --rc genhtml_legend=1 00:11:28.312 --rc geninfo_all_blocks=1 00:11:28.312 --rc geninfo_unexecuted_blocks=1 00:11:28.312 00:11:28.312 ' 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
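The nvmftestfini teardown that closed the multitarget run above reduces to a few commands: kill the target, unload the host-side NVMe modules, drop only the tagged iptables rules, and remove the namespace. A rough sketch; the namespace removal is written here as a plain ip netns delete, whereas the helper hides it behind remove_spdk_ns:

    kill "$nvmfpid" && wait "$nvmfpid"              # the nvmf_tgt pid printed by nvmfappstart
    modprobe -v -r nvme-tcp                         # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    # Strip only the rules common.sh tagged with the SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1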
00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:28.312 10:39:55 
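The "common.sh: line 33: [: : integer expression expected" message above is the usual symptom of feeding an empty variable to a numeric test ('[' '' -eq 1 ']'); the harness tolerates it because the failed test simply falls through. A small illustration of the failure mode and the usual defensive default; VAR is a placeholder, not the variable common.sh actually checks.

# Empty value: '[' '' -eq 1 ']' prints "integer expression expected" and fails.
VAR=""
if [ "$VAR" -eq 1 ] 2>/dev/null; then echo "enabled"; fi

# Defaulting the expansion keeps the test well-formed even when VAR is unset.
if [ "${VAR:-0}" -eq 1 ]; then echo "enabled"; else echo "disabled"; fi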
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:28.312 10:39:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:33.583 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:33.584 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:33.584 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:33.584 Found net devices under 0000:86:00.0: cvl_0_0 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:33.584 Found net devices under 0000:86:00.1: cvl_0_1 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.584 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.585 10:40:01 
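The device scan above reduces to matching PCI IDs against the supported NIC table (here the two Intel E810 functions, 0x8086:0x159b, which the ice driver exposes as cvl_0_0 and cvl_0_1) and reading the kernel's net directory for each function. Roughly the same check, hand-run; the only assumptions are lspci's output format and the standard sysfs layout.

# Enumerate E810 functions (vendor 0x8086, device 0x159b) and show the
# network interface the kernel created for each PCI function.
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci -> $(ls /sys/bus/pci/devices/"$pci"/net/ 2>/dev/null)"
done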
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.585 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:33.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:11:33.845 00:11:33.845 --- 10.0.0.2 ping statistics --- 00:11:33.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.845 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:33.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:11:33.845 00:11:33.845 --- 10.0.0.1 ping statistics --- 00:11:33.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.845 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2616820 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2616820 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 2616820 ']' 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:33.845 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.845 [2024-11-07 10:40:01.412183] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
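Condensed, the network setup logged above is: move the target-side port into its own namespace, assign the 10.0.0.0/24 pair, open TCP/4420, and verify reachability in both directions. The interface names (cvl_0_0/cvl_0_1) and addresses below are taken from this run and would differ on another host; this is a sketch of the sequence, not the harness's nvmf_tcp_init itself, and it assumes root/sudo and two ports that can reach each other.

TGT_IF=cvl_0_0        # target-side port (moved into the namespace)
INI_IF=cvl_0_1        # initiator-side port (stays in the default namespace)
NS=cvl_0_0_ns_spdk

sudo ip netns add "$NS"
sudo ip link set "$TGT_IF" netns "$NS"
sudo ip addr add 10.0.0.1/24 dev "$INI_IF"
sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
sudo ip link set "$INI_IF" up
sudo ip netns exec "$NS" ip link set "$TGT_IF" up
sudo ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic on the default port and check both directions.
sudo iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
sudo ip netns exec "$NS" ping -c 1 10.0.0.1

Everything after this point (the nvmf_tgt process, its listeners, and the RPC socket) runs inside cvl_0_0_ns_spdk via ip netns exec, while nvme-cli connects from the default namespace over cvl_0_1.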
00:11:33.845 [2024-11-07 10:40:01.412231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.845 [2024-11-07 10:40:01.480445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.103 [2024-11-07 10:40:01.523876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.103 [2024-11-07 10:40:01.523912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.103 [2024-11-07 10:40:01.523921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.103 [2024-11-07 10:40:01.523927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.103 [2024-11-07 10:40:01.523932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.103 [2024-11-07 10:40:01.525458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.103 [2024-11-07 10:40:01.525484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.103 [2024-11-07 10:40:01.525569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.103 [2024-11-07 10:40:01.525571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.103 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:34.103 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:34.103 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:34.103 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.103 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.103 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.103 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:34.103 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.103 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.103 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.103 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:34.103 "tick_rate": 2300000000, 00:11:34.103 "poll_groups": [ 00:11:34.103 { 00:11:34.103 "name": "nvmf_tgt_poll_group_000", 00:11:34.103 "admin_qpairs": 0, 00:11:34.103 "io_qpairs": 0, 00:11:34.103 "current_admin_qpairs": 0, 00:11:34.103 "current_io_qpairs": 0, 00:11:34.103 "pending_bdev_io": 0, 00:11:34.103 "completed_nvme_io": 0, 00:11:34.103 "transports": [] 00:11:34.103 }, 00:11:34.103 { 00:11:34.103 "name": "nvmf_tgt_poll_group_001", 00:11:34.103 "admin_qpairs": 0, 00:11:34.103 "io_qpairs": 0, 00:11:34.103 "current_admin_qpairs": 0, 00:11:34.103 "current_io_qpairs": 0, 00:11:34.103 "pending_bdev_io": 0, 00:11:34.103 "completed_nvme_io": 0, 00:11:34.103 "transports": [] 00:11:34.103 }, 00:11:34.103 { 00:11:34.103 "name": "nvmf_tgt_poll_group_002", 00:11:34.103 "admin_qpairs": 0, 00:11:34.103 "io_qpairs": 0, 00:11:34.103 
"current_admin_qpairs": 0, 00:11:34.103 "current_io_qpairs": 0, 00:11:34.103 "pending_bdev_io": 0, 00:11:34.103 "completed_nvme_io": 0, 00:11:34.103 "transports": [] 00:11:34.103 }, 00:11:34.103 { 00:11:34.103 "name": "nvmf_tgt_poll_group_003", 00:11:34.103 "admin_qpairs": 0, 00:11:34.103 "io_qpairs": 0, 00:11:34.103 "current_admin_qpairs": 0, 00:11:34.103 "current_io_qpairs": 0, 00:11:34.103 "pending_bdev_io": 0, 00:11:34.103 "completed_nvme_io": 0, 00:11:34.103 "transports": [] 00:11:34.103 } 00:11:34.103 ] 00:11:34.103 }' 00:11:34.103 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:34.104 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:34.104 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:34.104 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:34.104 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:34.104 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:34.361 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:34.361 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.361 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.361 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.361 [2024-11-07 10:40:01.782915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.361 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.361 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:34.361 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.361 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.361 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.361 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:34.361 "tick_rate": 2300000000, 00:11:34.361 "poll_groups": [ 00:11:34.361 { 00:11:34.361 "name": "nvmf_tgt_poll_group_000", 00:11:34.361 "admin_qpairs": 0, 00:11:34.361 "io_qpairs": 0, 00:11:34.361 "current_admin_qpairs": 0, 00:11:34.361 "current_io_qpairs": 0, 00:11:34.361 "pending_bdev_io": 0, 00:11:34.361 "completed_nvme_io": 0, 00:11:34.361 "transports": [ 00:11:34.361 { 00:11:34.361 "trtype": "TCP" 00:11:34.361 } 00:11:34.361 ] 00:11:34.361 }, 00:11:34.361 { 00:11:34.361 "name": "nvmf_tgt_poll_group_001", 00:11:34.361 "admin_qpairs": 0, 00:11:34.361 "io_qpairs": 0, 00:11:34.361 "current_admin_qpairs": 0, 00:11:34.361 "current_io_qpairs": 0, 00:11:34.361 "pending_bdev_io": 0, 00:11:34.361 "completed_nvme_io": 0, 00:11:34.361 "transports": [ 00:11:34.361 { 00:11:34.361 "trtype": "TCP" 00:11:34.361 } 00:11:34.361 ] 00:11:34.361 }, 00:11:34.362 { 00:11:34.362 "name": "nvmf_tgt_poll_group_002", 00:11:34.362 "admin_qpairs": 0, 00:11:34.362 "io_qpairs": 0, 00:11:34.362 "current_admin_qpairs": 0, 00:11:34.362 "current_io_qpairs": 0, 00:11:34.362 "pending_bdev_io": 0, 00:11:34.362 "completed_nvme_io": 0, 00:11:34.362 "transports": [ 00:11:34.362 { 00:11:34.362 "trtype": "TCP" 
00:11:34.362 } 00:11:34.362 ] 00:11:34.362 }, 00:11:34.362 { 00:11:34.362 "name": "nvmf_tgt_poll_group_003", 00:11:34.362 "admin_qpairs": 0, 00:11:34.362 "io_qpairs": 0, 00:11:34.362 "current_admin_qpairs": 0, 00:11:34.362 "current_io_qpairs": 0, 00:11:34.362 "pending_bdev_io": 0, 00:11:34.362 "completed_nvme_io": 0, 00:11:34.362 "transports": [ 00:11:34.362 { 00:11:34.362 "trtype": "TCP" 00:11:34.362 } 00:11:34.362 ] 00:11:34.362 } 00:11:34.362 ] 00:11:34.362 }' 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 Malloc1 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 [2024-11-07 10:40:01.962918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:34.362 10:40:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:34.362 [2024-11-07 10:40:01.997507] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:34.362 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:34.362 could not add new controller: failed to write to nvme-fabrics device 00:11:34.362 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:34.362 10:40:02 
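The expected-failure connect above is the test's access-control check: with allow_any_host disabled and no host entry on the subsystem, the target rejects the initiator's host NQN and nvme connect returns an I/O error. Granting access is a single RPC. Sketched below with SPDK's scripts/rpc.py (the wrapper behind the rpc_cmd calls in the trace); the NQNs and addresses are this run's values, the rpc.py path assumes the SPDK checkout as working directory, and --hostid is omitted for brevity.

SUBSYS=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

# Rejected: the subsystem neither lists this host nor allows any host.
sudo nvme connect -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420 --hostnqn "$HOSTNQN" \
    || echo "rejected as expected"

# Whitelist the host NQN on the target, then the same connect succeeds.
sudo ./scripts/rpc.py nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN"
sudo nvme connect -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420 --hostnqn "$HOSTNQN"
sudo nvme disconnect -n "$SUBSYS"

The trace then removes the host again, confirms the rejection comes back, and finally enables allow_any_host, which is the mode the rest of the test runs in.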
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:34.362 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:34.362 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:34.362 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:34.362 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.362 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.620 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.620 10:40:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:35.557 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:35.558 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:35.558 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:35.558 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:35.558 10:40:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:38.086 [2024-11-07 10:40:05.319652] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:38.086 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:38.086 could not add new controller: failed to write to nvme-fabrics device 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.086 
10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.086 10:40:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.019 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:39.020 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:39.020 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.020 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:39.020 10:40:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.921 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:41.179 
10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.179 [2024-11-07 10:40:08.620268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.179 10:40:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.120 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:42.120 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:42.120 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.120 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:42.120 10:40:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:44.645 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:44.645 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:44.645 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.645 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:44.645 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.645 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:44.645 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:44.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.646 [2024-11-07 10:40:11.908803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.646 10:40:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.578 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.579 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:45.579 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.579 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:45.579 10:40:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:47.475 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:47.475 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:47.475 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.475 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:47.475 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.475 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:47.475 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
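From here the test settles into its seq 1 5 loop: create the subsystem, expose it on 10.0.0.2:4420, attach the Malloc1 bdev as namespace 5, open it up, connect, disconnect, and tear everything back down. One iteration, condensed into plain rpc.py/nvme-cli calls as a sketch of what the rpc_cmd wrappers in the trace boil down to; the serial, NQN, bdev name and addresses are this run's values and the rpc.py path assumes the SPDK checkout.

SUBSYS=nqn.2016-06.io.spdk:cnode1
RPC="sudo ./scripts/rpc.py"

for i in $(seq 1 5); do
    $RPC nvmf_create_subsystem "$SUBSYS" -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_listener "$SUBSYS" -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns "$SUBSYS" Malloc1 -n 5   # Malloc1 was created earlier in the run
    $RPC nvmf_subsystem_allow_any_host "$SUBSYS"
    # The trace also passes --hostnqn/--hostid here; allow_any_host makes them optional.
    sudo nvme connect -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420
    sleep 2                                             # let the namespace show up in lsblk
    sudo nvme disconnect -n "$SUBSYS"
    $RPC nvmf_subsystem_remove_ns "$SUBSYS" 5
    $RPC nvmf_delete_subsystem "$SUBSYS"
done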
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.734 [2024-11-07 10:40:15.258705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.734 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.735 10:40:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.106 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:49.106 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:49.106 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.106 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:49.106 10:40:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:51.004 
10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.004 [2024-11-07 10:40:18.563774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.004 10:40:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.440 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:52.440 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:52.440 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.440 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:52.440 10:40:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
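Each iteration of the target/rpc.sh loop above tears down and rebuilds the same subsystem before reconnecting from the initiator. A minimal standalone sketch of that per-iteration cycle, assuming a running SPDK nvmf target with the TCP transport enabled and the Malloc1 bdev from this run already created (the rpc.py path, NQN, serial and 10.0.0.2:4420 listener are taken from the log; the wait loop is simplified relative to waitforserial):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path used by this job
  nqn=nqn.2016-06.io.spdk:cnode1
  # target side: build the subsystem (target/rpc.sh@82-85)
  $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
  $rpc nvmf_subsystem_allow_any_host "$nqn"
  # initiator side: connect and wait for the namespace to appear (target/rpc.sh@86-88);
  # the job additionally passes --hostnqn/--hostid on nvme connect
  nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
  # tear down again (target/rpc.sh@90-94)
  nvme disconnect -n "$nqn"
  $rpc nvmf_subsystem_remove_ns "$nqn" 5
  $rpc nvmf_delete_subsystem "$nqn"
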
00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.338 [2024-11-07 10:40:21.813394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.338 10:40:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.271 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:55.271 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:55.271 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.271 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:55.271 10:40:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:57.798 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:57.798 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:57.798 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:57.798 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:57.798 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.798 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:57.799 10:40:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:57.799 
10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 [2024-11-07 10:40:25.124403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 [2024-11-07 10:40:25.172480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 
10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 [2024-11-07 10:40:25.220603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.799 [2024-11-07 10:40:25.268768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.799 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.800 [2024-11-07 10:40:25.316915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:57.800 "tick_rate": 2300000000, 00:11:57.800 "poll_groups": [ 00:11:57.800 { 00:11:57.800 "name": "nvmf_tgt_poll_group_000", 00:11:57.800 "admin_qpairs": 2, 00:11:57.800 "io_qpairs": 168, 00:11:57.800 "current_admin_qpairs": 0, 00:11:57.800 "current_io_qpairs": 0, 00:11:57.800 "pending_bdev_io": 0, 00:11:57.800 "completed_nvme_io": 268, 00:11:57.800 "transports": [ 00:11:57.800 { 00:11:57.800 "trtype": "TCP" 00:11:57.800 } 00:11:57.800 ] 00:11:57.800 }, 00:11:57.800 { 00:11:57.800 "name": "nvmf_tgt_poll_group_001", 00:11:57.800 "admin_qpairs": 2, 00:11:57.800 "io_qpairs": 168, 00:11:57.800 "current_admin_qpairs": 0, 00:11:57.800 "current_io_qpairs": 0, 00:11:57.800 "pending_bdev_io": 0, 00:11:57.800 "completed_nvme_io": 318, 00:11:57.800 "transports": [ 00:11:57.800 { 00:11:57.800 "trtype": "TCP" 00:11:57.800 } 00:11:57.800 ] 00:11:57.800 }, 00:11:57.800 { 00:11:57.800 "name": "nvmf_tgt_poll_group_002", 00:11:57.800 "admin_qpairs": 1, 00:11:57.800 "io_qpairs": 168, 00:11:57.800 "current_admin_qpairs": 0, 00:11:57.800 "current_io_qpairs": 0, 00:11:57.800 "pending_bdev_io": 0, 00:11:57.800 "completed_nvme_io": 218, 00:11:57.800 "transports": [ 00:11:57.800 { 00:11:57.800 "trtype": "TCP" 00:11:57.800 } 00:11:57.800 ] 00:11:57.800 }, 00:11:57.800 { 00:11:57.800 "name": "nvmf_tgt_poll_group_003", 00:11:57.800 "admin_qpairs": 2, 00:11:57.800 "io_qpairs": 168, 00:11:57.800 "current_admin_qpairs": 0, 00:11:57.800 "current_io_qpairs": 0, 00:11:57.800 "pending_bdev_io": 0, 00:11:57.800 "completed_nvme_io": 218, 00:11:57.800 "transports": [ 00:11:57.800 { 00:11:57.800 "trtype": "TCP" 00:11:57.800 } 00:11:57.800 ] 00:11:57.800 } 00:11:57.800 ] 00:11:57.800 }' 00:11:57.800 10:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:57.800 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.059 rmmod nvme_tcp 00:11:58.059 rmmod nvme_fabrics 00:11:58.059 rmmod nvme_keyring 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2616820 ']' 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2616820 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 2616820 ']' 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 2616820 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2616820 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
2616820' 00:11:58.059 killing process with pid 2616820 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 2616820 00:11:58.059 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 2616820 00:11:58.317 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:58.317 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:58.317 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:58.317 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:58.317 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:58.317 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:58.317 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:58.317 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.317 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:58.317 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.317 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.317 10:40:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.219 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:00.219 00:12:00.219 real 0m32.324s 00:12:00.219 user 1m38.140s 00:12:00.219 sys 0m6.263s 00:12:00.219 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:00.219 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.219 ************************************ 00:12:00.219 END TEST nvmf_rpc 00:12:00.219 ************************************ 00:12:00.219 10:40:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:00.219 10:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:00.219 10:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:00.219 10:40:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:00.478 ************************************ 00:12:00.478 START TEST nvmf_invalid 00:12:00.478 ************************************ 00:12:00.478 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:00.478 * Looking for test storage... 
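The (( 7 > 0 )) and (( 672 > 0 )) checks that close the nvmf_rpc run above come from the jsum helper (target/rpc.sh@19-20), which sums one numeric field across the poll groups reported by nvmf_get_stats. A rough standalone equivalent, assuming the stats JSON printed above is held in $stats, would be:

  jsum() {
      # sum the numeric values a jq filter extracts from the stats JSON
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7 in this run
  jsum '.poll_groups[].io_qpairs'      # 4*168  = 672
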
00:12:00.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.478 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:00.478 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:12:00.478 10:40:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.478 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:00.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.479 --rc genhtml_branch_coverage=1 00:12:00.479 --rc genhtml_function_coverage=1 00:12:00.479 --rc genhtml_legend=1 00:12:00.479 --rc geninfo_all_blocks=1 00:12:00.479 --rc geninfo_unexecuted_blocks=1 00:12:00.479 00:12:00.479 ' 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:00.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.479 --rc genhtml_branch_coverage=1 00:12:00.479 --rc genhtml_function_coverage=1 00:12:00.479 --rc genhtml_legend=1 00:12:00.479 --rc geninfo_all_blocks=1 00:12:00.479 --rc geninfo_unexecuted_blocks=1 00:12:00.479 00:12:00.479 ' 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:00.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.479 --rc genhtml_branch_coverage=1 00:12:00.479 --rc genhtml_function_coverage=1 00:12:00.479 --rc genhtml_legend=1 00:12:00.479 --rc geninfo_all_blocks=1 00:12:00.479 --rc geninfo_unexecuted_blocks=1 00:12:00.479 00:12:00.479 ' 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:00.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.479 --rc genhtml_branch_coverage=1 00:12:00.479 --rc genhtml_function_coverage=1 00:12:00.479 --rc genhtml_legend=1 00:12:00.479 --rc geninfo_all_blocks=1 00:12:00.479 --rc geninfo_unexecuted_blocks=1 00:12:00.479 00:12:00.479 ' 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:00.479 10:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:00.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:00.479 10:40:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:07.040 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.040 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:07.041 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:07.041 Found net devices under 0000:86:00.0: cvl_0_0 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:07.041 Found net devices under 0000:86:00.1: cvl_0_1 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
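In the device scan above, gather_supported_nvmf_pci_devs matches the host's NICs against known Intel E810/X722 and Mellanox device IDs and then resolves each matching PCI function to its kernel net device through /sys/bus/pci/devices/<bdf>/net/, which is where the "Found net devices under 0000:86:00.x: cvl_0_x" lines come from. A hedged sketch of that resolution step; the lspci-based scan and the trimmed ID list are assumptions of this example (the real helper walks a prebuilt pci_bus_cache):

```bash
#!/usr/bin/env bash
# Sketch: map supported NIC PCI functions to their kernel net devices.
# The vendor:device list is trimmed to IDs visible in the trace; the
# lspci scan stands in for the script's own pci_bus_cache lookup.
set -euo pipefail

supported=("8086:1592" "8086:159b" "8086:37d2" "15b3:1017" "15b3:101d")

for id in "${supported[@]}"; do
    while read -r bdf _; do
        [[ -n "${bdf:-}" ]] || continue
        for path in "/sys/bus/pci/devices/$bdf/net/"*; do
            [[ -e "$path" ]] || continue        # function may have no netdev bound
            echo "Found net device under $bdf: ${path##*/}"
        done
    done < <(lspci -Dnd "$id" 2>/dev/null)
done
```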
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:12:07.041 00:12:07.041 --- 10.0.0.2 ping statistics --- 00:12:07.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.041 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:07.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:12:07.041 00:12:07.041 --- 10.0.0.1 ping statistics --- 00:12:07.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.041 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2624427 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2624427 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 2624427 ']' 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:07.041 10:40:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:07.041 [2024-11-07 10:40:33.991532] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
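At this point nvmf_tcp_init has built the two-namespace topology the rest of the test relies on: the target-side port is moved into its own network namespace, both ends of the 10.0.0.0/24 link are addressed, TCP port 4420 is opened on the initiator-side interface, reachability is confirmed with a ping in each direction, and nvmf_tgt is then launched inside that namespace. A sketch of the same sequence with placeholder interface names and binary path (assumptions of the example, not fixed values):

```bash
#!/usr/bin/env bash
# Sketch of the namespace topology set up above. Interface names, addresses
# and the nvmf_tgt path are placeholders for this example.
set -euo pipefail

NS=tgt_ns            # namespace that owns the target-side port
TGT_IF=cvl_0_0       # port handed to the target
INI_IF=cvl_0_1       # port left in the root namespace for the initiator
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF" || true
ip -4 addr flush "$INI_IF" || true

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add "$INI_IP/24" dev "$INI_IF"
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# allow NVMe/TCP traffic to the default port
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 "$TGT_IP"                        # root ns -> target ns
ip netns exec "$NS" ping -c 1 "$INI_IP"    # target ns -> root ns

# run the target inside the namespace (binary path is an assumption)
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
```

Running the application behind ip netns exec means that NVMe/TCP traffic between tools in the root namespace and the target listening at 10.0.0.2:4420 crosses a real interface pair instead of loopback inside a single stack.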
00:12:07.041 [2024-11-07 10:40:33.991574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.041 [2024-11-07 10:40:34.056248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.041 [2024-11-07 10:40:34.100408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.041 [2024-11-07 10:40:34.100449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.041 [2024-11-07 10:40:34.100458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.041 [2024-11-07 10:40:34.100465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.041 [2024-11-07 10:40:34.100473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.041 [2024-11-07 10:40:34.101859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.041 [2024-11-07 10:40:34.101966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.041 [2024-11-07 10:40:34.102078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.041 [2024-11-07 10:40:34.102079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.041 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:07.041 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:12:07.041 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.041 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:07.041 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:07.041 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.042 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:07.042 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22723 00:12:07.042 [2024-11-07 10:40:34.423767] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:07.042 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:07.042 { 00:12:07.042 "nqn": "nqn.2016-06.io.spdk:cnode22723", 00:12:07.042 "tgt_name": "foobar", 00:12:07.042 "method": "nvmf_create_subsystem", 00:12:07.042 "req_id": 1 00:12:07.042 } 00:12:07.042 Got JSON-RPC error response 00:12:07.042 response: 00:12:07.042 { 00:12:07.042 "code": -32603, 00:12:07.042 "message": "Unable to find target foobar" 00:12:07.042 }' 00:12:07.042 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:07.042 { 00:12:07.042 "nqn": "nqn.2016-06.io.spdk:cnode22723", 00:12:07.042 "tgt_name": "foobar", 00:12:07.042 "method": "nvmf_create_subsystem", 00:12:07.042 "req_id": 1 00:12:07.042 } 00:12:07.042 Got JSON-RPC error response 00:12:07.042 
response: 00:12:07.042 { 00:12:07.042 "code": -32603, 00:12:07.042 "message": "Unable to find target foobar" 00:12:07.042 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:07.042 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:07.042 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode942 00:12:07.042 [2024-11-07 10:40:34.628488] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode942: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:07.042 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:07.042 { 00:12:07.042 "nqn": "nqn.2016-06.io.spdk:cnode942", 00:12:07.042 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:07.042 "method": "nvmf_create_subsystem", 00:12:07.042 "req_id": 1 00:12:07.042 } 00:12:07.042 Got JSON-RPC error response 00:12:07.042 response: 00:12:07.042 { 00:12:07.042 "code": -32602, 00:12:07.042 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:07.042 }' 00:12:07.042 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:07.042 { 00:12:07.042 "nqn": "nqn.2016-06.io.spdk:cnode942", 00:12:07.042 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:07.042 "method": "nvmf_create_subsystem", 00:12:07.042 "req_id": 1 00:12:07.042 } 00:12:07.042 Got JSON-RPC error response 00:12:07.042 response: 00:12:07.042 { 00:12:07.042 "code": -32602, 00:12:07.042 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:07.042 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:07.042 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:07.042 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31001 00:12:07.299 [2024-11-07 10:40:34.833161] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31001: invalid model number 'SPDK_Controller' 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:07.299 { 00:12:07.299 "nqn": "nqn.2016-06.io.spdk:cnode31001", 00:12:07.299 "model_number": "SPDK_Controller\u001f", 00:12:07.299 "method": "nvmf_create_subsystem", 00:12:07.299 "req_id": 1 00:12:07.299 } 00:12:07.299 Got JSON-RPC error response 00:12:07.299 response: 00:12:07.299 { 00:12:07.299 "code": -32602, 00:12:07.299 "message": "Invalid MN SPDK_Controller\u001f" 00:12:07.299 }' 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:07.299 { 00:12:07.299 "nqn": "nqn.2016-06.io.spdk:cnode31001", 00:12:07.299 "model_number": "SPDK_Controller\u001f", 00:12:07.299 "method": "nvmf_create_subsystem", 00:12:07.299 "req_id": 1 00:12:07.299 } 00:12:07.299 Got JSON-RPC error response 00:12:07.299 response: 00:12:07.299 { 00:12:07.299 "code": -32602, 00:12:07.299 "message": "Invalid MN SPDK_Controller\u001f" 00:12:07.299 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:07.299 10:40:34 
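Each negative case above follows the same pattern: call nvmf_create_subsystem with exactly one deliberately bad field (an unknown target name, then a serial number and a model number carrying a control character), capture the JSON-RPC error, and check that the message matches the expected failure class (Unable to find target / Invalid SN / Invalid MN). A hedged sketch of that pattern against an already-running target; the socket path, NQN and bad values are placeholders, and rpc.py is assumed to be on PATH:

```bash
#!/usr/bin/env bash
# Sketch of the invalid-input checks traced above. The running target on
# /var/tmp/spdk.sock, the NQN and the bad values are assumptions here.
set -uo pipefail

RPC="rpc.py -s /var/tmp/spdk.sock"
NQN=nqn.2016-06.io.spdk:cnode1

expect_error() {
    local pattern=$1; shift
    local out
    if out=$("$@" 2>&1); then
        echo "unexpectedly succeeded: $*"; return 1
    fi
    [[ $out == *"$pattern"* ]] || { echo "wrong error for: $*"; return 1; }
    echo "got expected error: $pattern"
}

# unknown target name
expect_error 'Unable to find target' \
    $RPC nvmf_create_subsystem -t foobar "$NQN"

# serial number containing a control character (0x1f)
expect_error 'Invalid SN' \
    $RPC nvmf_create_subsystem -s $'BADSERIAL\x1f' "$NQN"

# model number containing a control character
expect_error 'Invalid MN' \
    $RPC nvmf_create_subsystem -d $'BADMODEL\x1f' "$NQN"
```

The test script itself does the same thing with an out= capture and a [[ ... == *pattern* ]] match, as visible in the surrounding trace.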
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:07.299 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.300 10:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.300 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.557 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:07.557 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x23' 00:12:07.557 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:07.557 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.557 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.557 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:07.557 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:07.557 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:07.557 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.557 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.557 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:07.557 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:07.557 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:07.558 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.558 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.558 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:07.558 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:07.558 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:07.558 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.558 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.558 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:07.558 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:07.558 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:07.558 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.558 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.558 10:40:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- ]] 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'O^v#^:A>.|=u_x;#VF$;h' 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'O^v#^:A>.|=u_x;#VF$;h' nqn.2016-06.io.spdk:cnode17720 00:12:07.558 [2024-11-07 10:40:35.182408] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17720: invalid serial number 
'O^v#^:A>.|=u_x;#VF$;h' 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:07.558 { 00:12:07.558 "nqn": "nqn.2016-06.io.spdk:cnode17720", 00:12:07.558 "serial_number": "O^v#^:A>.|=u_x;#VF$;h", 00:12:07.558 "method": "nvmf_create_subsystem", 00:12:07.558 "req_id": 1 00:12:07.558 } 00:12:07.558 Got JSON-RPC error response 00:12:07.558 response: 00:12:07.558 { 00:12:07.558 "code": -32602, 00:12:07.558 "message": "Invalid SN O^v#^:A>.|=u_x;#VF$;h" 00:12:07.558 }' 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:07.558 { 00:12:07.558 "nqn": "nqn.2016-06.io.spdk:cnode17720", 00:12:07.558 "serial_number": "O^v#^:A>.|=u_x;#VF$;h", 00:12:07.558 "method": "nvmf_create_subsystem", 00:12:07.558 "req_id": 1 00:12:07.558 } 00:12:07.558 Got JSON-RPC error response 00:12:07.558 response: 00:12:07.558 { 00:12:07.558 "code": -32602, 00:12:07.558 "message": "Invalid SN O^v#^:A>.|=u_x;#VF$;h" 00:12:07.558 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:07.558 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 
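The long printf/echo loop running through this part of the trace is gen_random_s assembling a candidate serial or model number one character at a time from ASCII codes 32 through 127, then guarding against a result that begins with '-'. A compact sketch of the same idea; the function name and the exact guard below are illustrative, not the helper's real implementation:

```bash
#!/usr/bin/env bash
# Compact equivalent of the character-by-character string builder traced
# above; an illustration, not the test suite's actual gen_random_s.
gen_random_string() {
    local length=$1 s='' code ch
    while (( ${#s} < length )); do
        code=$(( RANDOM % 96 + 32 ))                 # printable range 32..127
        ch=$(echo -e "\\x$(printf '%x' "$code")")    # same printf + echo -e trick
        s+=$ch
    done
    [[ $s == -* ]] && s="x${s:1}"   # keep the value from looking like an option flag
    printf '%s\n' "$s"
}

gen_random_string 21    # e.g. a 21-character serial-number candidate
gen_random_string 41    # e.g. a 41-character model-number candidate
```

Feeding such strings to nvmf_create_subsystem is what produces the Invalid SN / Invalid MN responses checked above.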
00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
117 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:07.817 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=';' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x61' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ D == \- ]] 00:12:07.818 10:40:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Ds7LJ3]8u:.[g|2g<{FQj5ZH>/dh;Lm/dh;Lm/dh;Lm/dh;Lm/dh;Lm /dev/null' 00:12:10.138 10:40:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.669 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:12.669 00:12:12.669 real 0m11.905s 00:12:12.669 user 0m18.613s 00:12:12.669 sys 0m5.388s 00:12:12.669 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.669 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:12.669 ************************************ 00:12:12.669 END TEST nvmf_invalid 00:12:12.669 ************************************ 00:12:12.670 10:40:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:12.670 10:40:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:12.670 10:40:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:12.670 10:40:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.670 ************************************ 00:12:12.670 START TEST nvmf_connect_stress 00:12:12.670 ************************************ 00:12:12.670 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:12.670 * Looking for test storage... 00:12:12.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.670 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:12.670 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:12:12.670 10:40:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:12.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.670 --rc genhtml_branch_coverage=1 00:12:12.670 --rc genhtml_function_coverage=1 00:12:12.670 --rc genhtml_legend=1 00:12:12.670 --rc geninfo_all_blocks=1 00:12:12.670 --rc geninfo_unexecuted_blocks=1 00:12:12.670 00:12:12.670 ' 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:12.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.670 --rc genhtml_branch_coverage=1 00:12:12.670 --rc genhtml_function_coverage=1 00:12:12.670 --rc genhtml_legend=1 00:12:12.670 --rc geninfo_all_blocks=1 00:12:12.670 --rc geninfo_unexecuted_blocks=1 00:12:12.670 00:12:12.670 ' 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:12.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.670 --rc genhtml_branch_coverage=1 00:12:12.670 --rc genhtml_function_coverage=1 00:12:12.670 --rc genhtml_legend=1 00:12:12.670 --rc geninfo_all_blocks=1 00:12:12.670 --rc geninfo_unexecuted_blocks=1 00:12:12.670 00:12:12.670 ' 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:12.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.670 --rc genhtml_branch_coverage=1 00:12:12.670 --rc genhtml_function_coverage=1 00:12:12.670 --rc genhtml_legend=1 00:12:12.670 --rc geninfo_all_blocks=1 00:12:12.670 --rc geninfo_unexecuted_blocks=1 00:12:12.670 00:12:12.670 ' 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.670 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:12.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.671 10:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:17.938 10:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:17.938 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:17.938 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:17.938 Found net devices under 0000:86:00.0: cvl_0_0 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:17.938 Found net devices under 0000:86:00.1: cvl_0_1 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.938 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:17.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:12:17.939 00:12:17.939 --- 10.0.0.2 ping statistics --- 00:12:17.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.939 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:17.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:12:17.939 00:12:17.939 --- 10.0.0.1 ping statistics --- 00:12:17.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.939 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2628803 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2628803 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 2628803 ']' 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:17.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:17.939 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.198 [2024-11-07 10:40:45.626444] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:12:18.198 [2024-11-07 10:40:45.626492] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.198 [2024-11-07 10:40:45.692630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:18.198 [2024-11-07 10:40:45.732402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.198 [2024-11-07 10:40:45.732445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.198 [2024-11-07 10:40:45.732453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.198 [2024-11-07 10:40:45.732459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.198 [2024-11-07 10:40:45.732464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.198 [2024-11-07 10:40:45.733783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.198 [2024-11-07 10:40:45.733851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.198 [2024-11-07 10:40:45.733853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.198 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:18.198 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:12:18.198 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:18.198 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:18.198 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.456 [2024-11-07 10:40:45.877547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
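The rpc_cmd calls traced here and in the lines that follow are the entire target bring-up for this test: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, attach a listener on 10.0.0.2:4420, and back it with a null bdev. The sketch below reproduces that sequence by hand through scripts/rpc.py (which rpc_cmd wraps in the test suite); the flags, addresses and NQN are copied from the trace, while the final nvmf_subsystem_add_ns call is an assumption, since the trace only shows connect_stress.sh assembling its rpc.txt batch, not the batch contents.

#!/usr/bin/env bash
# Minimal sketch, assuming the SPDK tree and the netns layout set up earlier in this log.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1
rpc() { "$SPDK/scripts/rpc.py" "$@"; }

# Start the target inside the namespace holding cvl_0_0, with the same core
# mask and log flags as this run; the suite waits on /var/tmp/spdk.sock via
# waitforlisten where this sketch just sleeps.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
sleep 2

rpc nvmf_create_transport -t tcp -o -u 8192                      # transport opts as traced
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10  # allow any host, 10 namespaces max
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                              # 1000 MiB null bdev, 512 B blocks
rpc nvmf_subsystem_add_ns "$NQN" NULL1                           # assumed: expose NULL1 as a namespace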
00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.456 [2024-11-07 10:40:45.897794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.456 NULL1 00:12:18.456 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2628832 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:18.457 10:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:18.457 10:40:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.457 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.457 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.715 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.715 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:18.715 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.715 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.715 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.281 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.281 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:19.281 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.281 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.281 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.538 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.538 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:19.538 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.538 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.538 10:40:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.796 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.796 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:19.796 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.796 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.796 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.053 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.053 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:20.053 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.053 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.053 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.311 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.311 10:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:20.311 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.311 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.311 10:40:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.875 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.875 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:20.875 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.875 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.875 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.133 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.133 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:21.133 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.133 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.133 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.391 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.391 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:21.391 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.391 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.391 10:40:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.648 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.648 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:21.648 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.648 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.648 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.906 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.167 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:22.167 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.167 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.167 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.424 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.424 10:40:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:22.424 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.424 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.424 10:40:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.682 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.682 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:22.682 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.682 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.682 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.940 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.940 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:22.940 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.940 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.940 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.505 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.505 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:23.505 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.505 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.505 10:40:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.762 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.762 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:23.762 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.762 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.762 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.020 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.020 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:24.020 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.020 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.020 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.278 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.278 10:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:24.278 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.278 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.278 10:40:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.536 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.536 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:24.536 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.536 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.536 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.101 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.101 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:25.101 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.102 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.102 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.359 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.359 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:25.359 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.359 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.359 10:40:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.617 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.617 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:25.617 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.617 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.617 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.875 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.875 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:25.875 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.875 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.875 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.441 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.441 10:40:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:26.441 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.441 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.441 10:40:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.699 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.699 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:26.699 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.699 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.699 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.957 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.957 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:26.957 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.957 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.957 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.216 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.216 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:27.216 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.216 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.216 10:40:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.473 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.473 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:27.473 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.473 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.473 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.038 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.038 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:28.038 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.038 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.038 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.296 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.296 10:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:28.296 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.296 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.296 10:40:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.554 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2628832 00:12:28.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2628832) - No such process 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2628832 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.554 rmmod nvme_tcp 00:12:28.554 rmmod nvme_fabrics 00:12:28.554 rmmod nvme_keyring 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2628803 ']' 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2628803 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 2628803 ']' 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 2628803 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2628803 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 
00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2628803' 00:12:28.554 killing process with pid 2628803 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 2628803 00:12:28.554 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 2628803 00:12:28.813 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:28.813 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:28.813 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:28.813 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:28.813 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:28.813 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:28.813 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:28.813 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.813 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:28.813 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.813 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.813 10:40:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:31.345 00:12:31.345 real 0m18.582s 00:12:31.345 user 0m39.067s 00:12:31.345 sys 0m8.265s 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:31.345 ************************************ 00:12:31.345 END TEST nvmf_connect_stress 00:12:31.345 ************************************ 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.345 ************************************ 00:12:31.345 START TEST nvmf_fused_ordering 00:12:31.345 ************************************ 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:31.345 * Looking for test storage... 
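The long kill -0 trace above is the connect_stress watchdog: the script keeps polling the stress process and issuing RPCs at the target until that process exits, then tears the target down. A minimal sketch of that loop, reconstructed from the xtrace (variable names here are illustrative, not necessarily the exact ones in connect_stress.sh):

    while kill -0 "$stress_pid" 2> /dev/null; do   # line 34; it eventually reports "No such process" and the loop ends
        rpc_cmd                                    # line 35: drive RPCs at the target; its input does not show in the xtrace
    done
    wait "$stress_pid"                             # line 38
    rm -f "$testdir/rpc.txt"                       # line 39
    trap - SIGINT SIGTERM EXIT                     # line 41
    nvmftestfini                                   # line 43: rmmod nvme-tcp/nvme-fabrics, kill the nvmf_tgt pid, restore iptables, flush the test IPs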
00:12:31.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:31.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.345 --rc genhtml_branch_coverage=1 00:12:31.345 --rc genhtml_function_coverage=1 00:12:31.345 --rc genhtml_legend=1 00:12:31.345 --rc geninfo_all_blocks=1 00:12:31.345 --rc geninfo_unexecuted_blocks=1 00:12:31.345 00:12:31.345 ' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:31.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.345 --rc genhtml_branch_coverage=1 00:12:31.345 --rc genhtml_function_coverage=1 00:12:31.345 --rc genhtml_legend=1 00:12:31.345 --rc geninfo_all_blocks=1 00:12:31.345 --rc geninfo_unexecuted_blocks=1 00:12:31.345 00:12:31.345 ' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:31.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.345 --rc genhtml_branch_coverage=1 00:12:31.345 --rc genhtml_function_coverage=1 00:12:31.345 --rc genhtml_legend=1 00:12:31.345 --rc geninfo_all_blocks=1 00:12:31.345 --rc geninfo_unexecuted_blocks=1 00:12:31.345 00:12:31.345 ' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:31.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.345 --rc genhtml_branch_coverage=1 00:12:31.345 --rc genhtml_function_coverage=1 00:12:31.345 --rc genhtml_legend=1 00:12:31.345 --rc geninfo_all_blocks=1 00:12:31.345 --rc geninfo_unexecuted_blocks=1 00:12:31.345 00:12:31.345 ' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:31.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:31.345 10:40:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.663 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.663 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:36.663 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:36.663 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:36.663 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:36.663 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:36.663 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:36.663 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:36.663 10:41:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:36.663 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:36.663 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:36.663 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:36.663 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:36.663 10:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:36.663 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:36.663 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.663 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.663 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.663 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.663 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.663 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.663 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:36.664 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:36.664 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:36.664 Found net devices under 0000:86:00.0: cvl_0_0 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:36.664 Found net devices under 0000:86:00.1: cvl_0_1 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:36.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:12:36.664 00:12:36.664 --- 10.0.0.2 ping statistics --- 00:12:36.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.664 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:36.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:12:36.664 00:12:36.664 --- 10.0.0.1 ping statistics --- 00:12:36.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.664 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2633988 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2633988 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 2633988 ']' 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:36.664 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:36.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.665 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:36.665 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.924 [2024-11-07 10:41:04.350452] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:12:36.924 [2024-11-07 10:41:04.350508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.924 [2024-11-07 10:41:04.417123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.924 [2024-11-07 10:41:04.460167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.924 [2024-11-07 10:41:04.460202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.924 [2024-11-07 10:41:04.460210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.924 [2024-11-07 10:41:04.460216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.924 [2024-11-07 10:41:04.460221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.924 [2024-11-07 10:41:04.460785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.924 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:36.924 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:12:36.924 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:36.924 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:36.924 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.924 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.924 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.924 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.924 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.924 [2024-11-07 10:41:04.592100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:37.182 [2024-11-07 10:41:04.612313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:37.182 NULL1 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.182 10:41:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:37.182 [2024-11-07 10:41:04.670359] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
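The target bring-up just above boils down to a handful of RPCs; rpc_cmd in this harness is effectively scripts/rpc.py pointed at the /var/tmp/spdk.sock socket that waitforlisten reported earlier. Replayed by hand against the same target, the sequence would look roughly like this (flags copied verbatim from the run above; only the comments are added):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks; shows up below as namespace 1, size 1GB
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The RPCs travel over the Unix socket, which is why no "ip netns exec" wrapper is needed for them even though the target's TCP listener lives inside the cvl_0_0_ns_spdk namespace.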
00:12:37.182 [2024-11-07 10:41:04.670391] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2634011 ] 00:12:37.441 Attached to nqn.2016-06.io.spdk:cnode1 00:12:37.441 Namespace ID: 1 size: 1GB 00:12:37.441 fused_ordering(0) 00:12:37.441 fused_ordering(1) 00:12:37.441 fused_ordering(2) 00:12:37.441 fused_ordering(3) 00:12:37.441 fused_ordering(4) 00:12:37.441 fused_ordering(5) 00:12:37.441 fused_ordering(6) 00:12:37.441 fused_ordering(7) 00:12:37.441 fused_ordering(8) 00:12:37.441 fused_ordering(9) 00:12:37.441 fused_ordering(10) 00:12:37.441 fused_ordering(11) 00:12:37.441 fused_ordering(12) 00:12:37.441 fused_ordering(13) 00:12:37.441 fused_ordering(14) 00:12:37.441 fused_ordering(15) 00:12:37.441 fused_ordering(16) 00:12:37.441 fused_ordering(17) 00:12:37.441 fused_ordering(18) 00:12:37.441 fused_ordering(19) 00:12:37.441 fused_ordering(20) 00:12:37.441 fused_ordering(21) 00:12:37.441 fused_ordering(22) 00:12:37.441 fused_ordering(23) 00:12:37.441 fused_ordering(24) 00:12:37.441 fused_ordering(25) 00:12:37.441 fused_ordering(26) 00:12:37.441 fused_ordering(27) 00:12:37.441 fused_ordering(28) 00:12:37.441 fused_ordering(29) 00:12:37.441 fused_ordering(30) 00:12:37.441 fused_ordering(31) 00:12:37.441 fused_ordering(32) 00:12:37.441 fused_ordering(33) 00:12:37.441 fused_ordering(34) 00:12:37.441 fused_ordering(35) 00:12:37.441 fused_ordering(36) 00:12:37.441 fused_ordering(37) 00:12:37.441 fused_ordering(38) 00:12:37.441 fused_ordering(39) 00:12:37.441 fused_ordering(40) 00:12:37.441 fused_ordering(41) 00:12:37.441 fused_ordering(42) 00:12:37.441 fused_ordering(43) 00:12:37.441 fused_ordering(44) 00:12:37.441 fused_ordering(45) 00:12:37.441 fused_ordering(46) 00:12:37.441 fused_ordering(47) 00:12:37.441 fused_ordering(48) 00:12:37.441 fused_ordering(49) 00:12:37.441 fused_ordering(50) 00:12:37.441 fused_ordering(51) 00:12:37.441 fused_ordering(52) 00:12:37.441 fused_ordering(53) 00:12:37.441 fused_ordering(54) 00:12:37.441 fused_ordering(55) 00:12:37.441 fused_ordering(56) 00:12:37.441 fused_ordering(57) 00:12:37.441 fused_ordering(58) 00:12:37.441 fused_ordering(59) 00:12:37.441 fused_ordering(60) 00:12:37.441 fused_ordering(61) 00:12:37.441 fused_ordering(62) 00:12:37.441 fused_ordering(63) 00:12:37.441 fused_ordering(64) 00:12:37.441 fused_ordering(65) 00:12:37.441 fused_ordering(66) 00:12:37.441 fused_ordering(67) 00:12:37.441 fused_ordering(68) 00:12:37.441 fused_ordering(69) 00:12:37.441 fused_ordering(70) 00:12:37.441 fused_ordering(71) 00:12:37.441 fused_ordering(72) 00:12:37.441 fused_ordering(73) 00:12:37.441 fused_ordering(74) 00:12:37.441 fused_ordering(75) 00:12:37.441 fused_ordering(76) 00:12:37.441 fused_ordering(77) 00:12:37.441 fused_ordering(78) 00:12:37.441 fused_ordering(79) 00:12:37.441 fused_ordering(80) 00:12:37.441 fused_ordering(81) 00:12:37.441 fused_ordering(82) 00:12:37.441 fused_ordering(83) 00:12:37.441 fused_ordering(84) 00:12:37.441 fused_ordering(85) 00:12:37.441 fused_ordering(86) 00:12:37.441 fused_ordering(87) 00:12:37.441 fused_ordering(88) 00:12:37.441 fused_ordering(89) 00:12:37.441 fused_ordering(90) 00:12:37.441 fused_ordering(91) 00:12:37.441 fused_ordering(92) 00:12:37.441 fused_ordering(93) 00:12:37.441 fused_ordering(94) 00:12:37.441 fused_ordering(95) 00:12:37.441 fused_ordering(96) 00:12:37.441 fused_ordering(97) 00:12:37.441 fused_ordering(98) 
00:12:37.441 fused_ordering(99) 00:12:37.441 fused_ordering(100) 00:12:37.441 fused_ordering(101) 00:12:37.441 fused_ordering(102) 00:12:37.441 fused_ordering(103) 00:12:37.441 fused_ordering(104) 00:12:37.441 fused_ordering(105) 00:12:37.441 fused_ordering(106) 00:12:37.441 fused_ordering(107) 00:12:37.441 fused_ordering(108) 00:12:37.441 fused_ordering(109) 00:12:37.441 fused_ordering(110) 00:12:37.441 fused_ordering(111) 00:12:37.441 fused_ordering(112) 00:12:37.441 fused_ordering(113) 00:12:37.441 fused_ordering(114) 00:12:37.441 fused_ordering(115) 00:12:37.441 fused_ordering(116) 00:12:37.441 fused_ordering(117) 00:12:37.441 fused_ordering(118) 00:12:37.441 fused_ordering(119) 00:12:37.441 fused_ordering(120) 00:12:37.441 fused_ordering(121) 00:12:37.441 fused_ordering(122) 00:12:37.441 fused_ordering(123) 00:12:37.441 fused_ordering(124) 00:12:37.441 fused_ordering(125) 00:12:37.441 fused_ordering(126) 00:12:37.441 fused_ordering(127) 00:12:37.441 fused_ordering(128) 00:12:37.441 fused_ordering(129) 00:12:37.441 fused_ordering(130) 00:12:37.441 fused_ordering(131) 00:12:37.441 fused_ordering(132) 00:12:37.441 fused_ordering(133) 00:12:37.441 fused_ordering(134) 00:12:37.441 fused_ordering(135) 00:12:37.441 fused_ordering(136) 00:12:37.441 fused_ordering(137) 00:12:37.441 fused_ordering(138) 00:12:37.441 fused_ordering(139) 00:12:37.441 fused_ordering(140) 00:12:37.441 fused_ordering(141) 00:12:37.441 fused_ordering(142) 00:12:37.441 fused_ordering(143) 00:12:37.441 fused_ordering(144) 00:12:37.441 fused_ordering(145) 00:12:37.441 fused_ordering(146) 00:12:37.441 fused_ordering(147) 00:12:37.441 fused_ordering(148) 00:12:37.441 fused_ordering(149) 00:12:37.441 fused_ordering(150) 00:12:37.441 fused_ordering(151) 00:12:37.441 fused_ordering(152) 00:12:37.441 fused_ordering(153) 00:12:37.441 fused_ordering(154) 00:12:37.441 fused_ordering(155) 00:12:37.441 fused_ordering(156) 00:12:37.441 fused_ordering(157) 00:12:37.441 fused_ordering(158) 00:12:37.441 fused_ordering(159) 00:12:37.441 fused_ordering(160) 00:12:37.441 fused_ordering(161) 00:12:37.441 fused_ordering(162) 00:12:37.441 fused_ordering(163) 00:12:37.441 fused_ordering(164) 00:12:37.441 fused_ordering(165) 00:12:37.441 fused_ordering(166) 00:12:37.441 fused_ordering(167) 00:12:37.441 fused_ordering(168) 00:12:37.441 fused_ordering(169) 00:12:37.441 fused_ordering(170) 00:12:37.441 fused_ordering(171) 00:12:37.441 fused_ordering(172) 00:12:37.441 fused_ordering(173) 00:12:37.441 fused_ordering(174) 00:12:37.441 fused_ordering(175) 00:12:37.441 fused_ordering(176) 00:12:37.441 fused_ordering(177) 00:12:37.441 fused_ordering(178) 00:12:37.441 fused_ordering(179) 00:12:37.441 fused_ordering(180) 00:12:37.441 fused_ordering(181) 00:12:37.441 fused_ordering(182) 00:12:37.441 fused_ordering(183) 00:12:37.441 fused_ordering(184) 00:12:37.441 fused_ordering(185) 00:12:37.441 fused_ordering(186) 00:12:37.441 fused_ordering(187) 00:12:37.441 fused_ordering(188) 00:12:37.441 fused_ordering(189) 00:12:37.441 fused_ordering(190) 00:12:37.441 fused_ordering(191) 00:12:37.441 fused_ordering(192) 00:12:37.441 fused_ordering(193) 00:12:37.441 fused_ordering(194) 00:12:37.441 fused_ordering(195) 00:12:37.441 fused_ordering(196) 00:12:37.441 fused_ordering(197) 00:12:37.441 fused_ordering(198) 00:12:37.441 fused_ordering(199) 00:12:37.441 fused_ordering(200) 00:12:37.441 fused_ordering(201) 00:12:37.441 fused_ordering(202) 00:12:37.441 fused_ordering(203) 00:12:37.441 fused_ordering(204) 00:12:37.441 fused_ordering(205) 00:12:37.700 
fused_ordering(206) 00:12:37.700 ... 00:12:37.701 fused_ordering(344) ... 00:12:38.267 fused_ordering(411) ... 00:12:38.526 fused_ordering(616) ... 00:12:39.093 fused_ordering(821) ... 00:12:39.094 fused_ordering(958) [per-request fused_ordering progress lines 206-958 condensed; only the timestamp transitions are kept]
00:12:39.094 fused_ordering(959) 00:12:39.094 fused_ordering(960) 00:12:39.094 fused_ordering(961) 00:12:39.094 fused_ordering(962) 00:12:39.094 fused_ordering(963) 00:12:39.094 fused_ordering(964) 00:12:39.094 fused_ordering(965) 00:12:39.094 fused_ordering(966) 00:12:39.094 fused_ordering(967) 00:12:39.094 fused_ordering(968) 00:12:39.094 fused_ordering(969) 00:12:39.094 fused_ordering(970) 00:12:39.094 fused_ordering(971) 00:12:39.094 fused_ordering(972) 00:12:39.094 fused_ordering(973) 00:12:39.094 fused_ordering(974) 00:12:39.094 fused_ordering(975) 00:12:39.094 fused_ordering(976) 00:12:39.094 fused_ordering(977) 00:12:39.094 fused_ordering(978) 00:12:39.094 fused_ordering(979) 00:12:39.094 fused_ordering(980) 00:12:39.094 fused_ordering(981) 00:12:39.094 fused_ordering(982) 00:12:39.094 fused_ordering(983) 00:12:39.094 fused_ordering(984) 00:12:39.094 fused_ordering(985) 00:12:39.094 fused_ordering(986) 00:12:39.094 fused_ordering(987) 00:12:39.094 fused_ordering(988) 00:12:39.094 fused_ordering(989) 00:12:39.094 fused_ordering(990) 00:12:39.094 fused_ordering(991) 00:12:39.094 fused_ordering(992) 00:12:39.094 fused_ordering(993) 00:12:39.094 fused_ordering(994) 00:12:39.094 fused_ordering(995) 00:12:39.094 fused_ordering(996) 00:12:39.094 fused_ordering(997) 00:12:39.094 fused_ordering(998) 00:12:39.094 fused_ordering(999) 00:12:39.094 fused_ordering(1000) 00:12:39.094 fused_ordering(1001) 00:12:39.094 fused_ordering(1002) 00:12:39.094 fused_ordering(1003) 00:12:39.094 fused_ordering(1004) 00:12:39.094 fused_ordering(1005) 00:12:39.094 fused_ordering(1006) 00:12:39.094 fused_ordering(1007) 00:12:39.094 fused_ordering(1008) 00:12:39.094 fused_ordering(1009) 00:12:39.094 fused_ordering(1010) 00:12:39.094 fused_ordering(1011) 00:12:39.094 fused_ordering(1012) 00:12:39.094 fused_ordering(1013) 00:12:39.094 fused_ordering(1014) 00:12:39.094 fused_ordering(1015) 00:12:39.094 fused_ordering(1016) 00:12:39.094 fused_ordering(1017) 00:12:39.094 fused_ordering(1018) 00:12:39.094 fused_ordering(1019) 00:12:39.094 fused_ordering(1020) 00:12:39.094 fused_ordering(1021) 00:12:39.094 fused_ordering(1022) 00:12:39.094 fused_ordering(1023) 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:39.094 rmmod nvme_tcp 00:12:39.094 rmmod nvme_fabrics 00:12:39.094 rmmod nvme_keyring 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:39.094 10:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2633988 ']' 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2633988 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 2633988 ']' 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 2633988 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2633988 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2633988' 00:12:39.094 killing process with pid 2633988 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 2633988 00:12:39.094 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 2633988 00:12:39.353 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:39.353 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:39.353 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:39.353 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:39.353 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:39.353 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:39.353 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:39.353 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:39.353 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:39.353 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.353 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.353 10:41:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.280 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:41.280 00:12:41.280 real 0m10.399s 00:12:41.280 user 0m5.082s 00:12:41.280 sys 0m5.549s 00:12:41.280 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:41.280 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:41.280 ************************************ 00:12:41.280 END TEST nvmf_fused_ordering 00:12:41.280 
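The teardown above removes only the firewall rules the test itself installed: the ruleset is dumped with iptables-save, every line tagged with the SPDK_NVMF comment is filtered out, and the remainder is loaded back with iptables-restore, leaving pre-existing rules untouched. A minimal standalone sketch of that add/remove pattern, reusing the interface, port and comment tag that appear in this log (not the harness's actual helper functions):

#!/usr/bin/env bash
# Open the NVMe/TCP port on the initiator-facing interface and tag the rule
# so it can be identified later (tag convention taken from this log).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Teardown: dump the ruleset, drop only the tagged lines, reload the rest.
iptables-save | grep -v SPDK_NVMF | iptables-restore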
************************************ 00:12:41.280 10:41:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:41.280 10:41:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:41.280 10:41:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:41.280 10:41:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:41.539 ************************************ 00:12:41.539 START TEST nvmf_ns_masking 00:12:41.539 ************************************ 00:12:41.539 10:41:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:41.539 * Looking for test storage... 00:12:41.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.539 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:41.539 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:12:41.539 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:41.539 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:41.539 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.539 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.539 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.539 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.539 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.539 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:41.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.540 --rc genhtml_branch_coverage=1 00:12:41.540 --rc genhtml_function_coverage=1 00:12:41.540 --rc genhtml_legend=1 00:12:41.540 --rc geninfo_all_blocks=1 00:12:41.540 --rc geninfo_unexecuted_blocks=1 00:12:41.540 00:12:41.540 ' 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:41.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.540 --rc genhtml_branch_coverage=1 00:12:41.540 --rc genhtml_function_coverage=1 00:12:41.540 --rc genhtml_legend=1 00:12:41.540 --rc geninfo_all_blocks=1 00:12:41.540 --rc geninfo_unexecuted_blocks=1 00:12:41.540 00:12:41.540 ' 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:41.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.540 --rc genhtml_branch_coverage=1 00:12:41.540 --rc genhtml_function_coverage=1 00:12:41.540 --rc genhtml_legend=1 00:12:41.540 --rc geninfo_all_blocks=1 00:12:41.540 --rc geninfo_unexecuted_blocks=1 00:12:41.540 00:12:41.540 ' 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:41.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.540 --rc genhtml_branch_coverage=1 00:12:41.540 --rc genhtml_function_coverage=1 00:12:41.540 --rc genhtml_legend=1 00:12:41.540 --rc geninfo_all_blocks=1 00:12:41.540 --rc geninfo_unexecuted_blocks=1 00:12:41.540 00:12:41.540 ' 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:41.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=718ed20b-8c5e-4f4e-9599-a690b6ea0fbe 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=0e12555d-3737-4d64-8b73-284c45898222 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:41.540 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=e8c5bac1-c4c2-4a1a-8236-27ffb54727a8 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:41.800 10:41:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:47.070 10:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.070 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:47.071 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:47.071 10:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:47.071 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:47.071 Found net devices under 0000:86:00.0: cvl_0_0 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:47.071 Found net devices under 0000:86:00.1: cvl_0_1 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.071 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.330 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.330 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.330 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:47.330 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.330 10:41:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.330 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.330 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:47.330 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:47.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:12:47.330 00:12:47.330 --- 10.0.0.2 ping statistics --- 00:12:47.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.330 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:12:47.330 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:47.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:12:47.330 00:12:47.330 --- 10.0.0.1 ping statistics --- 00:12:47.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.331 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2637898 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2637898 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2637898 ']' 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:47.331 10:41:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.331 [2024-11-07 10:41:14.944997] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:12:47.331 [2024-11-07 10:41:14.945045] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.590 [2024-11-07 10:41:15.012216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.590 [2024-11-07 10:41:15.054379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.590 [2024-11-07 10:41:15.054413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.590 [2024-11-07 10:41:15.054421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.590 [2024-11-07 10:41:15.054427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.590 [2024-11-07 10:41:15.054436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
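At this point nvmf_tgt has been launched inside the cvl_0_0_ns_spdk namespace and the harness is waiting for it to come up before issuing any RPCs; conceptually, the waitforlisten step polls the process and the JSON-RPC UNIX socket (/var/tmp/spdk.sock) until an RPC succeeds. A rough, hypothetical sketch of such a wait loop (the pid, socket path and retry count are taken from this log; the loop itself is illustrative, not SPDK's actual implementation):

#!/usr/bin/env bash
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
rpc_sock=/var/tmp/spdk.sock
pid=2637898                       # nvmfpid reported above

for ((i = 0; i < 100; i++)); do   # max_retries=100, as in the log
    # Bail out if the target died during startup.
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    # rpc_get_methods succeeds once the target is listening on the socket.
    if "$rpc_py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $pid) is listening on $rpc_sock"
        break
    fi
    sleep 0.5
done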
00:12:47.590 [2024-11-07 10:41:15.055003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.590 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:47.590 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:12:47.590 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:47.590 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:47.590 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.590 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.590 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:47.848 [2024-11-07 10:41:15.351230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.848 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:47.848 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:47.848 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:48.106 Malloc1 00:12:48.106 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:48.106 Malloc2 00:12:48.106 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:48.364 10:41:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:48.623 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.881 [2024-11-07 10:41:16.321979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.881 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:48.881 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e8c5bac1-c4c2-4a1a-8236-27ffb54727a8 -a 10.0.0.2 -s 4420 -i 4 00:12:48.881 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.881 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:48.881 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.881 10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:48.881 
10:41:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:51.412 [ 0]:0x1 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.412 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56d2f7665eb943889a6dfa7fdbe2a7c7 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56d2f7665eb943889a6dfa7fdbe2a7c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:51.413 [ 0]:0x1 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56d2f7665eb943889a6dfa7fdbe2a7c7 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56d2f7665eb943889a6dfa7fdbe2a7c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.413 10:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:51.413 [ 1]:0x2 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a89ac2cd7864abf9dd47c91d3b6d412 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a89ac2cd7864abf9dd47c91d3b6d412 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:51.413 10:41:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.413 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.671 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:51.930 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:51.930 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e8c5bac1-c4c2-4a1a-8236-27ffb54727a8 -a 10.0.0.2 -s 4420 -i 4 00:12:51.930 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:51.930 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:51.930 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.930 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:12:51.930 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:12:51.930 10:41:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:54.459 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:54.460 [ 0]:0x2 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=4a89ac2cd7864abf9dd47c91d3b6d412 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a89ac2cd7864abf9dd47c91d3b6d412 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:54.460 [ 0]:0x1 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56d2f7665eb943889a6dfa7fdbe2a7c7 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56d2f7665eb943889a6dfa7fdbe2a7c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.460 [ 1]:0x2 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a89ac2cd7864abf9dd47c91d3b6d412 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a89ac2cd7864abf9dd47c91d3b6d412 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.460 10:41:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:54.718 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:54.718 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:54.718 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:54.718 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:54.718 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.718 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:54.718 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:54.718 10:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:54.718 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.718 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:54.718 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:54.718 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.718 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:54.719 [ 0]:0x2 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a89ac2cd7864abf9dd47c91d3b6d412 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a89ac2cd7864abf9dd47c91d3b6d412 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:54.719 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.977 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:55.235 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:55.235 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e8c5bac1-c4c2-4a1a-8236-27ffb54727a8 -a 10.0.0.2 -s 4420 -i 4 00:12:55.235 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:55.235 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:55.235 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.235 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:12:55.235 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:12:55.235 10:41:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:57.763 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:57.763 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:57.763 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.763 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:12:57.763 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.763 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:57.763 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:57.763 10:41:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:57.763 [ 0]:0x1 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=56d2f7665eb943889a6dfa7fdbe2a7c7 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 56d2f7665eb943889a6dfa7fdbe2a7c7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:57.763 [ 1]:0x2 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a89ac2cd7864abf9dd47c91d3b6d412 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a89ac2cd7864abf9dd47c91d3b6d412 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:57.763 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:58.021 [ 0]:0x2 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a89ac2cd7864abf9dd47c91d3b6d412 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a89ac2cd7864abf9dd47c91d3b6d412 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.021 10:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.021 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.022 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.022 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.022 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.022 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.022 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.022 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:58.022 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:58.022 [2024-11-07 10:41:25.672468] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:58.022 request: 00:12:58.022 { 00:12:58.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:58.022 "nsid": 2, 00:12:58.022 "host": "nqn.2016-06.io.spdk:host1", 00:12:58.022 "method": "nvmf_ns_remove_host", 00:12:58.022 "req_id": 1 00:12:58.022 } 00:12:58.022 Got JSON-RPC error response 00:12:58.022 response: 00:12:58.022 { 00:12:58.022 "code": -32602, 00:12:58.022 "message": "Invalid parameters" 00:12:58.022 } 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:58.280 10:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:58.280 [ 0]:0x2 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a89ac2cd7864abf9dd47c91d3b6d412 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a89ac2cd7864abf9dd47c91d3b6d412 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2639773 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2639773 /var/tmp/host.sock 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2639773 ']' 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:58.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:58.280 10:41:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:58.280 [2024-11-07 10:41:25.900252] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:12:58.280 [2024-11-07 10:41:25.900300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2639773 ] 00:12:58.539 [2024-11-07 10:41:25.964316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.539 [2024-11-07 10:41:26.008051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.797 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:58.797 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:12:58.797 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.797 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:59.055 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 718ed20b-8c5e-4f4e-9599-a690b6ea0fbe 00:12:59.055 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:59.055 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 718ED20B8C5E4F4E9599A690B6EA0FBE -i 00:12:59.313 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 0e12555d-3737-4d64-8b73-284c45898222 00:12:59.313 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:59.313 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 0E12555D37374D648B73284C45898222 -i 00:12:59.571 10:41:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:59.571 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:59.829 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:59.829 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:00.394 nvme0n1 00:13:00.394 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:00.394 10:41:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:00.651 nvme1n2 00:13:00.652 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:00.652 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:00.652 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:00.652 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:00.652 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:00.909 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:00.909 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:00.910 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:00.910 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:01.168 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 718ed20b-8c5e-4f4e-9599-a690b6ea0fbe == \7\1\8\e\d\2\0\b\-\8\c\5\e\-\4\f\4\e\-\9\5\9\9\-\a\6\9\0\b\6\e\a\0\f\b\e ]] 00:13:01.168 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:01.168 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:01.168 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:01.168 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
0e12555d-3737-4d64-8b73-284c45898222 == \0\e\1\2\5\5\5\d\-\3\7\3\7\-\4\d\6\4\-\8\b\7\3\-\2\8\4\c\4\5\8\9\8\2\2\2 ]] 00:13:01.168 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.426 10:41:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 718ed20b-8c5e-4f4e-9599-a690b6ea0fbe 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 718ED20B8C5E4F4E9599A690B6EA0FBE 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 718ED20B8C5E4F4E9599A690B6EA0FBE 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:01.684 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 718ED20B8C5E4F4E9599A690B6EA0FBE 00:13:01.684 [2024-11-07 10:41:29.350733] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:01.684 [2024-11-07 10:41:29.350767] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:01.684 [2024-11-07 10:41:29.350776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:01.942 request: 00:13:01.942 { 00:13:01.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:01.942 "namespace": { 00:13:01.942 "bdev_name": 
"invalid", 00:13:01.942 "nsid": 1, 00:13:01.942 "nguid": "718ED20B8C5E4F4E9599A690B6EA0FBE", 00:13:01.942 "no_auto_visible": false 00:13:01.942 }, 00:13:01.942 "method": "nvmf_subsystem_add_ns", 00:13:01.942 "req_id": 1 00:13:01.942 } 00:13:01.942 Got JSON-RPC error response 00:13:01.942 response: 00:13:01.942 { 00:13:01.942 "code": -32602, 00:13:01.942 "message": "Invalid parameters" 00:13:01.942 } 00:13:01.942 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:01.942 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.942 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.942 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.942 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 718ed20b-8c5e-4f4e-9599-a690b6ea0fbe 00:13:01.942 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:01.942 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 718ED20B8C5E4F4E9599A690B6EA0FBE -i 00:13:01.942 10:41:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2639773 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2639773 ']' 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2639773 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2639773 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2639773' 00:13:04.468 killing process with pid 2639773 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2639773 00:13:04.468 10:41:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2639773 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:04.726 rmmod nvme_tcp 00:13:04.726 rmmod nvme_fabrics 00:13:04.726 rmmod nvme_keyring 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2637898 ']' 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2637898 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2637898 ']' 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2637898 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:04.726 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2637898 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2637898' 00:13:04.986 killing process with pid 2637898 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2637898 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2637898 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
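Stripped of the xtrace noise, the namespace-masking flow that the test above exercised reduces to a short series of rpc.py and nvme-cli calls. The sketch below repeats only commands that appear in the trace, with paths shortened and the target assumed to be already running and listening on 10.0.0.2:4420:

    # Create the TCP transport and a subsystem with one masked namespace.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Expose namespace 1 to a specific host NQN, then hide it again.
    ./scripts/rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    ./scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    # From the initiator side, connect as that host and check what is visible.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    nvme list-ns /dev/nvme0          # a masked namespace is absent from this list
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

This is what the ns_is_visible / NOT ns_is_visible checks earlier in the log verify: nvme list-ns only reports the namespace IDs the controller's host is allowed to see, and nvme id-ns returns an all-zero NGUID for the invisible ones.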
00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.986 10:41:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:07.519 00:13:07.519 real 0m25.718s 00:13:07.519 user 0m30.883s 00:13:07.519 sys 0m6.812s 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:07.519 ************************************ 00:13:07.519 END TEST nvmf_ns_masking 00:13:07.519 ************************************ 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.519 ************************************ 00:13:07.519 START TEST nvmf_nvme_cli 00:13:07.519 ************************************ 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:07.519 * Looking for test storage... 
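The iptables-save | grep -v SPDK_NVMF | iptables-restore sequence in the teardown above is the counterpart of the ACCEPT rule installed near the start of the test, which was tagged with an SPDK_NVMF comment precisely so it could be stripped wholesale later. Taken out of the helper functions, the pair looks like:

    # Setup: allow NVMe/TCP traffic to port 4420 and tag the rule for later cleanup.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Teardown: rewrite the ruleset without any SPDK_NVMF-tagged entries.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

Filtering on the comment rather than deleting by rule specification means the cleanup works even if several tagged rules were added during the run.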
00:13:07.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:07.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.519 --rc genhtml_branch_coverage=1 00:13:07.519 --rc genhtml_function_coverage=1 00:13:07.519 --rc genhtml_legend=1 00:13:07.519 --rc geninfo_all_blocks=1 00:13:07.519 --rc geninfo_unexecuted_blocks=1 00:13:07.519 00:13:07.519 ' 00:13:07.519 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:07.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.519 --rc genhtml_branch_coverage=1 00:13:07.519 --rc genhtml_function_coverage=1 00:13:07.519 --rc genhtml_legend=1 00:13:07.519 --rc geninfo_all_blocks=1 00:13:07.520 --rc geninfo_unexecuted_blocks=1 00:13:07.520 00:13:07.520 ' 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:07.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.520 --rc genhtml_branch_coverage=1 00:13:07.520 --rc genhtml_function_coverage=1 00:13:07.520 --rc genhtml_legend=1 00:13:07.520 --rc geninfo_all_blocks=1 00:13:07.520 --rc geninfo_unexecuted_blocks=1 00:13:07.520 00:13:07.520 ' 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:07.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.520 --rc genhtml_branch_coverage=1 00:13:07.520 --rc genhtml_function_coverage=1 00:13:07.520 --rc genhtml_legend=1 00:13:07.520 --rc geninfo_all_blocks=1 00:13:07.520 --rc geninfo_unexecuted_blocks=1 00:13:07.520 00:13:07.520 ' 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
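The lt/cmp_versions trace above is how autotest decides whether the installed lcov (reported as 1.15 in this run) predates 2.x before enabling the branch and function coverage flags. The fragment below is a hand-written approximation of that dotted-version comparison, not the scripts/common.sh source; the function name lt_version is invented for this sketch.

lt_version() {                      # usage: lt_version 1.15 2  -> success if $1 < $2
    local IFS=.-                    # split fields on dots and dashes, as in the trace
    local -a ver1=($1) ver2=($2)
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                        # equal versions are not "less than"
}

if lt_version 1.15 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi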
00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:07.520 10:41:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.520 10:41:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:12.787 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:12.788 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:12.788 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.788 
10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:12.788 Found net devices under 0000:86:00.0: cvl_0_0 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:12.788 Found net devices under 0000:86:00.1: cvl_0_1 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.788 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:13.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:13:13.047 00:13:13.047 --- 10.0.0.2 ping statistics --- 00:13:13.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.047 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:13.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:13:13.047 00:13:13.047 --- 10.0.0.1 ping statistics --- 00:13:13.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.047 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2644480 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2644480 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 2644480 ']' 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:13.047 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.305 [2024-11-07 10:41:40.735024] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
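The namespace plumbing traced just above, before nvmf_tgt is brought up, splits the two e810 ports so the initiator at 10.0.0.1 (cvl_0_1, root namespace) can reach the target at 10.0.0.2 (cvl_0_0, moved into cvl_0_0_ns_spdk) over the physical link, with an iptables rule opening port 4420 and a ping in each direction as a sanity check. A condensed restatement of that sequence follows; the variable names are mine, while the commands and addresses are the ones traced in this run.

TARGET_NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0            # 0000:86:00.0, handed to the target namespace
INITIATOR_IF=cvl_0_1         # 0000:86:00.1, stays in the root namespace

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up
# open the NVMe/TCP port; the comment lets teardown strip exactly this rule later
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1   # target -> initiator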
00:13:13.305 [2024-11-07 10:41:40.735069] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.305 [2024-11-07 10:41:40.802722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:13.305 [2024-11-07 10:41:40.846982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.305 [2024-11-07 10:41:40.847023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.305 [2024-11-07 10:41:40.847031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.306 [2024-11-07 10:41:40.847037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.306 [2024-11-07 10:41:40.847042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.306 [2024-11-07 10:41:40.848552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.306 [2024-11-07 10:41:40.848646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.306 [2024-11-07 10:41:40.848736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.306 [2024-11-07 10:41:40.848738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.306 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:13.306 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:13:13.306 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:13.306 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:13.306 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.576 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.576 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:13.576 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.576 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.576 [2024-11-07 10:41:40.980847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.576 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.576 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:13.576 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.576 10:41:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.576 Malloc0 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.576 Malloc1 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.576 [2024-11-07 10:41:41.076542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.576 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:13.858 00:13:13.858 Discovery Log Number of Records 2, Generation counter 2 00:13:13.858 =====Discovery Log Entry 0====== 00:13:13.858 trtype: tcp 00:13:13.858 adrfam: ipv4 00:13:13.858 subtype: current discovery subsystem 00:13:13.858 treq: not required 00:13:13.858 portid: 0 00:13:13.858 trsvcid: 4420 00:13:13.858 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:13.858 traddr: 10.0.0.2 00:13:13.858 eflags: explicit discovery connections, duplicate discovery information 00:13:13.858 sectype: none 00:13:13.858 =====Discovery Log Entry 1====== 00:13:13.858 trtype: tcp 00:13:13.858 adrfam: ipv4 00:13:13.858 subtype: nvme subsystem 00:13:13.858 treq: not required 00:13:13.858 portid: 0 00:13:13.858 trsvcid: 4420 00:13:13.858 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:13.858 traddr: 10.0.0.2 00:13:13.858 eflags: none 00:13:13.858 sectype: none 00:13:13.858 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:13.858 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:13.858 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:13.858 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.858 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:13.858 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:13.858 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.858 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:13.858 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:13.858 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:13.858 10:41:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.804 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:14.804 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:13:14.804 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.804 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:13:14.804 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:13:14.804 10:41:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:17.333 10:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:17.333 /dev/nvme0n2 ]] 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:17.333 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:17.334 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:17.334 10:41:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.592 10:41:45 
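Stripped of the xtrace noise, the nvme_cli exercise above reduces to a short rpc.py plus nvme-cli sequence: create the TCP transport, back two namespaces with Malloc bdevs, expose them on 10.0.0.2:4420, then discover, connect, count the block devices by serial, and disconnect. The summary below paraphrases the traced commands rather than quoting test/nvmf/target/nvme_cli.sh; the host NQN/ID and paths are the ones printed in this run, and rpc_cmd in the harness additionally handles retries and RPC socket selection.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
hostid=80aaeb9f-0274-ea11-906e-0017a4403562

# target side (the nvmf_tgt process is running inside cvl_0_0_ns_spdk)
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# initiator side (root namespace)
nvme discover --hostnqn=$hostnqn --hostid=$hostid -t tcp -a 10.0.0.2 -s 4420
nvme connect  --hostnqn=$hostnqn --hostid=$hostid -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # expect 2 (nvme0n1, nvme0n2)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1             # teardown then deletes cnode1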
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.592 rmmod nvme_tcp 00:13:17.592 rmmod nvme_fabrics 00:13:17.592 rmmod nvme_keyring 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2644480 ']' 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2644480 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 2644480 ']' 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 2644480 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
2644480 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2644480' 00:13:17.592 killing process with pid 2644480 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 2644480 00:13:17.592 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 2644480 00:13:17.851 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:17.851 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:17.851 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:17.851 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:17.851 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:17.851 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:17.851 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:17.851 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:17.851 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:17.851 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.851 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:17.851 10:41:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:20.383 00:13:20.383 real 0m12.733s 00:13:20.383 user 0m19.822s 00:13:20.383 sys 0m4.917s 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.383 ************************************ 00:13:20.383 END TEST nvmf_nvme_cli 00:13:20.383 ************************************ 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.383 ************************************ 00:13:20.383 START TEST nvmf_vfio_user 00:13:20.383 ************************************ 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:13:20.383 * Looking for test storage... 00:13:20.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:20.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.383 --rc genhtml_branch_coverage=1 00:13:20.383 --rc genhtml_function_coverage=1 00:13:20.383 --rc genhtml_legend=1 00:13:20.383 --rc geninfo_all_blocks=1 00:13:20.383 --rc geninfo_unexecuted_blocks=1 00:13:20.383 00:13:20.383 ' 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:20.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.383 --rc genhtml_branch_coverage=1 00:13:20.383 --rc genhtml_function_coverage=1 00:13:20.383 --rc genhtml_legend=1 00:13:20.383 --rc geninfo_all_blocks=1 00:13:20.383 --rc geninfo_unexecuted_blocks=1 00:13:20.383 00:13:20.383 ' 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:20.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.383 --rc genhtml_branch_coverage=1 00:13:20.383 --rc genhtml_function_coverage=1 00:13:20.383 --rc genhtml_legend=1 00:13:20.383 --rc geninfo_all_blocks=1 00:13:20.383 --rc geninfo_unexecuted_blocks=1 00:13:20.383 00:13:20.383 ' 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:20.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.383 --rc genhtml_branch_coverage=1 00:13:20.383 --rc genhtml_function_coverage=1 00:13:20.383 --rc genhtml_legend=1 00:13:20.383 --rc geninfo_all_blocks=1 00:13:20.383 --rc geninfo_unexecuted_blocks=1 00:13:20.383 00:13:20.383 ' 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.383 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
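The trace above shows test/nvmf/common.sh deriving the host identity with nvme-cli and build_nvmf_app_args appending the shared-memory ID and tracepoint mask to the target's argument list. A minimal stand-alone sketch of the same idea follows; it assumes nvme-cli is installed, and the way the host ID is peeled off the hostnqn here is illustrative rather than a copy of common.sh.

# Sketch of the host-identity and app-argument setup traced above (illustrative, not common.sh itself).
NVME_HOSTNQN=$(nvme gen-hostnqn)                 # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}                  # keep only the trailing UUID (assumed extraction)
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

NVMF_APP_SHM_ID=0
NVMF_APP=(./build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)      # mirrors the build_nvmf_app_args step in the trace
printf 'would run: %s\n' "${NVMF_APP[*]}"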
00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2645775 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2645775' 00:13:20.384 Process pid: 2645775 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2645775 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2645775 ']' 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:20.384 10:41:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:20.384 [2024-11-07 10:41:47.829124] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:13:20.384 [2024-11-07 10:41:47.829170] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.384 [2024-11-07 10:41:47.894019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.384 [2024-11-07 10:41:47.936607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.384 [2024-11-07 10:41:47.936644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:20.384 [2024-11-07 10:41:47.936651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.384 [2024-11-07 10:41:47.936658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.384 [2024-11-07 10:41:47.936663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.384 [2024-11-07 10:41:47.938217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.384 [2024-11-07 10:41:47.938242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.384 [2024-11-07 10:41:47.938331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.384 [2024-11-07 10:41:47.938333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.384 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:20.384 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:13:20.384 10:41:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:21.758 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:21.758 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:21.758 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:21.758 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:21.758 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:21.758 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:22.016 Malloc1 00:13:22.016 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:22.273 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:22.273 10:41:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:22.531 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:22.531 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:22.531 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:22.790 Malloc2 00:13:22.790 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
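The RPC calls traced here are what setup_nvmf_vfio_user '' '' boils down to: create the VFIOUSER transport once, then give each device a malloc bdev, a subsystem, a namespace, and a vfio-user listener; the second device (Malloc2/cnode2) is completed in the trace that continues below. A condensed sketch of that sequence, with the full jenkins workspace path to rpc.py abbreviated:

# Condensed sketch of the traced setup; rpc.py path shortened, NUM_DEVICES=2 as in the test.
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"                      # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
done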
00:13:23.048 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:23.306 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:23.306 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:23.306 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:23.306 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:23.306 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:23.306 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:23.306 10:41:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:23.566 [2024-11-07 10:41:50.975948] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:13:23.566 [2024-11-07 10:41:50.975984] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2646322 ] 00:13:23.566 [2024-11-07 10:41:51.015383] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:23.566 [2024-11-07 10:41:51.020691] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:23.566 [2024-11-07 10:41:51.020715] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb886dbc000 00:13:23.566 [2024-11-07 10:41:51.021695] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.566 [2024-11-07 10:41:51.022698] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.566 [2024-11-07 10:41:51.023707] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.566 [2024-11-07 10:41:51.024710] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:23.566 [2024-11-07 10:41:51.025715] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:23.566 [2024-11-07 10:41:51.026719] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.566 [2024-11-07 10:41:51.027727] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:13:23.566 [2024-11-07 10:41:51.028731] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:23.566 [2024-11-07 10:41:51.029740] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:23.566 [2024-11-07 10:41:51.029750] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb886db1000 00:13:23.566 [2024-11-07 10:41:51.030703] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:23.566 [2024-11-07 10:41:51.044320] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:23.566 [2024-11-07 10:41:51.044350] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:23.566 [2024-11-07 10:41:51.046852] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:23.566 [2024-11-07 10:41:51.046892] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:23.566 [2024-11-07 10:41:51.046966] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:23.566 [2024-11-07 10:41:51.046982] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:23.566 [2024-11-07 10:41:51.046987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:23.566 [2024-11-07 10:41:51.047859] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:23.566 [2024-11-07 10:41:51.047871] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:23.566 [2024-11-07 10:41:51.047878] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:23.566 [2024-11-07 10:41:51.048860] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:23.566 [2024-11-07 10:41:51.048870] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:23.566 [2024-11-07 10:41:51.048877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:23.566 [2024-11-07 10:41:51.049862] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:23.566 [2024-11-07 10:41:51.049871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:23.567 [2024-11-07 10:41:51.050871] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:13:23.567 [2024-11-07 10:41:51.050880] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:23.567 [2024-11-07 10:41:51.050885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:23.567 [2024-11-07 10:41:51.050891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:23.567 [2024-11-07 10:41:51.050999] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:23.567 [2024-11-07 10:41:51.051004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:23.567 [2024-11-07 10:41:51.051009] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:23.567 [2024-11-07 10:41:51.051883] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:23.567 [2024-11-07 10:41:51.052883] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:23.567 [2024-11-07 10:41:51.053887] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:23.567 [2024-11-07 10:41:51.054882] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:23.567 [2024-11-07 10:41:51.054965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:23.567 [2024-11-07 10:41:51.055895] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:23.567 [2024-11-07 10:41:51.055904] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:23.567 [2024-11-07 10:41:51.055911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.055930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:23.567 [2024-11-07 10:41:51.055938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.055955] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:23.567 [2024-11-07 10:41:51.055961] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.567 [2024-11-07 10:41:51.055966] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.567 [2024-11-07 10:41:51.055981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:13:23.567 [2024-11-07 10:41:51.056031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:23.567 [2024-11-07 10:41:51.056040] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:23.567 [2024-11-07 10:41:51.056046] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:23.567 [2024-11-07 10:41:51.056052] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:23.567 [2024-11-07 10:41:51.056058] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:23.567 [2024-11-07 10:41:51.056062] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:23.567 [2024-11-07 10:41:51.056067] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:23.567 [2024-11-07 10:41:51.056073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:23.567 [2024-11-07 10:41:51.056104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:23.567 [2024-11-07 10:41:51.056115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.567 [2024-11-07 10:41:51.056123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.567 [2024-11-07 10:41:51.056130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.567 [2024-11-07 10:41:51.056139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:23.567 [2024-11-07 10:41:51.056144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:23.567 [2024-11-07 10:41:51.056170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:23.567 [2024-11-07 10:41:51.056177] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:23.567 
[2024-11-07 10:41:51.056182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:23.567 [2024-11-07 10:41:51.056220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:23.567 [2024-11-07 10:41:51.056271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056288] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:23.567 [2024-11-07 10:41:51.056294] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:23.567 [2024-11-07 10:41:51.056298] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.567 [2024-11-07 10:41:51.056305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:23.567 [2024-11-07 10:41:51.056320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:23.567 [2024-11-07 10:41:51.056331] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:23.567 [2024-11-07 10:41:51.056342] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056356] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:23.567 [2024-11-07 10:41:51.056360] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.567 [2024-11-07 10:41:51.056364] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.567 [2024-11-07 10:41:51.056370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.567 [2024-11-07 10:41:51.056394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:23.567 [2024-11-07 10:41:51.056405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056420] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:23.567 [2024-11-07 10:41:51.056424] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.567 [2024-11-07 10:41:51.056427] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.567 [2024-11-07 10:41:51.056438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.567 [2024-11-07 10:41:51.056451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:23.567 [2024-11-07 10:41:51.056461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056496] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:23.567 [2024-11-07 10:41:51.056501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:23.567 [2024-11-07 10:41:51.056507] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:23.567 [2024-11-07 10:41:51.056526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:23.567 [2024-11-07 10:41:51.056540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:23.567 [2024-11-07 10:41:51.056550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:23.568 [2024-11-07 10:41:51.056559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:23.568 [2024-11-07 10:41:51.056569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:23.568 [2024-11-07 10:41:51.056577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:23.568 [2024-11-07 10:41:51.056587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:23.568 [2024-11-07 10:41:51.056596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:23.568 [2024-11-07 10:41:51.056607] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:23.568 [2024-11-07 10:41:51.056612] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:23.568 [2024-11-07 10:41:51.056615] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:23.568 [2024-11-07 10:41:51.056618] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:23.568 [2024-11-07 10:41:51.056621] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:23.568 [2024-11-07 10:41:51.056627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:23.568 [2024-11-07 10:41:51.056633] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:23.568 [2024-11-07 10:41:51.056638] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:23.568 [2024-11-07 10:41:51.056642] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.568 [2024-11-07 10:41:51.056648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:23.568 [2024-11-07 10:41:51.056657] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:23.568 [2024-11-07 10:41:51.056661] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:23.568 [2024-11-07 10:41:51.056665] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.568 [2024-11-07 10:41:51.056675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:23.568 [2024-11-07 10:41:51.056683] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:23.568 [2024-11-07 10:41:51.056687] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:23.568 [2024-11-07 10:41:51.056690] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:23.568 [2024-11-07 10:41:51.056695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:23.568 [2024-11-07 10:41:51.056702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:23.568 [2024-11-07 10:41:51.056712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:13:23.568 [2024-11-07 10:41:51.056721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:23.568 [2024-11-07 10:41:51.056728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:23.568 ===================================================== 00:13:23.568 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:23.568 ===================================================== 00:13:23.568 Controller Capabilities/Features 00:13:23.568 ================================ 00:13:23.568 Vendor ID: 4e58 00:13:23.568 Subsystem Vendor ID: 4e58 00:13:23.568 Serial Number: SPDK1 00:13:23.568 Model Number: SPDK bdev Controller 00:13:23.568 Firmware Version: 25.01 00:13:23.568 Recommended Arb Burst: 6 00:13:23.568 IEEE OUI Identifier: 8d 6b 50 00:13:23.568 Multi-path I/O 00:13:23.568 May have multiple subsystem ports: Yes 00:13:23.568 May have multiple controllers: Yes 00:13:23.568 Associated with SR-IOV VF: No 00:13:23.568 Max Data Transfer Size: 131072 00:13:23.568 Max Number of Namespaces: 32 00:13:23.568 Max Number of I/O Queues: 127 00:13:23.568 NVMe Specification Version (VS): 1.3 00:13:23.568 NVMe Specification Version (Identify): 1.3 00:13:23.568 Maximum Queue Entries: 256 00:13:23.568 Contiguous Queues Required: Yes 00:13:23.568 Arbitration Mechanisms Supported 00:13:23.568 Weighted Round Robin: Not Supported 00:13:23.568 Vendor Specific: Not Supported 00:13:23.568 Reset Timeout: 15000 ms 00:13:23.568 Doorbell Stride: 4 bytes 00:13:23.568 NVM Subsystem Reset: Not Supported 00:13:23.568 Command Sets Supported 00:13:23.568 NVM Command Set: Supported 00:13:23.568 Boot Partition: Not Supported 00:13:23.568 Memory Page Size Minimum: 4096 bytes 00:13:23.568 Memory Page Size Maximum: 4096 bytes 00:13:23.568 Persistent Memory Region: Not Supported 00:13:23.568 Optional Asynchronous Events Supported 00:13:23.568 Namespace Attribute Notices: Supported 00:13:23.568 Firmware Activation Notices: Not Supported 00:13:23.568 ANA Change Notices: Not Supported 00:13:23.568 PLE Aggregate Log Change Notices: Not Supported 00:13:23.568 LBA Status Info Alert Notices: Not Supported 00:13:23.568 EGE Aggregate Log Change Notices: Not Supported 00:13:23.568 Normal NVM Subsystem Shutdown event: Not Supported 00:13:23.568 Zone Descriptor Change Notices: Not Supported 00:13:23.568 Discovery Log Change Notices: Not Supported 00:13:23.568 Controller Attributes 00:13:23.568 128-bit Host Identifier: Supported 00:13:23.568 Non-Operational Permissive Mode: Not Supported 00:13:23.568 NVM Sets: Not Supported 00:13:23.568 Read Recovery Levels: Not Supported 00:13:23.568 Endurance Groups: Not Supported 00:13:23.568 Predictable Latency Mode: Not Supported 00:13:23.568 Traffic Based Keep ALive: Not Supported 00:13:23.568 Namespace Granularity: Not Supported 00:13:23.568 SQ Associations: Not Supported 00:13:23.568 UUID List: Not Supported 00:13:23.568 Multi-Domain Subsystem: Not Supported 00:13:23.568 Fixed Capacity Management: Not Supported 00:13:23.568 Variable Capacity Management: Not Supported 00:13:23.568 Delete Endurance Group: Not Supported 00:13:23.568 Delete NVM Set: Not Supported 00:13:23.568 Extended LBA Formats Supported: Not Supported 00:13:23.568 Flexible Data Placement Supported: Not Supported 00:13:23.568 00:13:23.568 Controller Memory Buffer Support 00:13:23.568 ================================ 00:13:23.568 
Supported: No 00:13:23.568 00:13:23.568 Persistent Memory Region Support 00:13:23.568 ================================ 00:13:23.568 Supported: No 00:13:23.568 00:13:23.568 Admin Command Set Attributes 00:13:23.568 ============================ 00:13:23.568 Security Send/Receive: Not Supported 00:13:23.568 Format NVM: Not Supported 00:13:23.568 Firmware Activate/Download: Not Supported 00:13:23.568 Namespace Management: Not Supported 00:13:23.568 Device Self-Test: Not Supported 00:13:23.568 Directives: Not Supported 00:13:23.568 NVMe-MI: Not Supported 00:13:23.568 Virtualization Management: Not Supported 00:13:23.568 Doorbell Buffer Config: Not Supported 00:13:23.568 Get LBA Status Capability: Not Supported 00:13:23.568 Command & Feature Lockdown Capability: Not Supported 00:13:23.568 Abort Command Limit: 4 00:13:23.568 Async Event Request Limit: 4 00:13:23.568 Number of Firmware Slots: N/A 00:13:23.568 Firmware Slot 1 Read-Only: N/A 00:13:23.568 Firmware Activation Without Reset: N/A 00:13:23.568 Multiple Update Detection Support: N/A 00:13:23.568 Firmware Update Granularity: No Information Provided 00:13:23.568 Per-Namespace SMART Log: No 00:13:23.568 Asymmetric Namespace Access Log Page: Not Supported 00:13:23.568 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:23.568 Command Effects Log Page: Supported 00:13:23.568 Get Log Page Extended Data: Supported 00:13:23.568 Telemetry Log Pages: Not Supported 00:13:23.568 Persistent Event Log Pages: Not Supported 00:13:23.568 Supported Log Pages Log Page: May Support 00:13:23.568 Commands Supported & Effects Log Page: Not Supported 00:13:23.568 Feature Identifiers & Effects Log Page:May Support 00:13:23.568 NVMe-MI Commands & Effects Log Page: May Support 00:13:23.568 Data Area 4 for Telemetry Log: Not Supported 00:13:23.568 Error Log Page Entries Supported: 128 00:13:23.568 Keep Alive: Supported 00:13:23.568 Keep Alive Granularity: 10000 ms 00:13:23.568 00:13:23.568 NVM Command Set Attributes 00:13:23.568 ========================== 00:13:23.568 Submission Queue Entry Size 00:13:23.568 Max: 64 00:13:23.568 Min: 64 00:13:23.568 Completion Queue Entry Size 00:13:23.568 Max: 16 00:13:23.568 Min: 16 00:13:23.568 Number of Namespaces: 32 00:13:23.568 Compare Command: Supported 00:13:23.568 Write Uncorrectable Command: Not Supported 00:13:23.568 Dataset Management Command: Supported 00:13:23.568 Write Zeroes Command: Supported 00:13:23.568 Set Features Save Field: Not Supported 00:13:23.568 Reservations: Not Supported 00:13:23.568 Timestamp: Not Supported 00:13:23.568 Copy: Supported 00:13:23.568 Volatile Write Cache: Present 00:13:23.568 Atomic Write Unit (Normal): 1 00:13:23.568 Atomic Write Unit (PFail): 1 00:13:23.568 Atomic Compare & Write Unit: 1 00:13:23.568 Fused Compare & Write: Supported 00:13:23.568 Scatter-Gather List 00:13:23.568 SGL Command Set: Supported (Dword aligned) 00:13:23.568 SGL Keyed: Not Supported 00:13:23.569 SGL Bit Bucket Descriptor: Not Supported 00:13:23.569 SGL Metadata Pointer: Not Supported 00:13:23.569 Oversized SGL: Not Supported 00:13:23.569 SGL Metadata Address: Not Supported 00:13:23.569 SGL Offset: Not Supported 00:13:23.569 Transport SGL Data Block: Not Supported 00:13:23.569 Replay Protected Memory Block: Not Supported 00:13:23.569 00:13:23.569 Firmware Slot Information 00:13:23.569 ========================= 00:13:23.569 Active slot: 1 00:13:23.569 Slot 1 Firmware Revision: 25.01 00:13:23.569 00:13:23.569 00:13:23.569 Commands Supported and Effects 00:13:23.569 ============================== 00:13:23.569 Admin 
Commands 00:13:23.569 -------------- 00:13:23.569 Get Log Page (02h): Supported 00:13:23.569 Identify (06h): Supported 00:13:23.569 Abort (08h): Supported 00:13:23.569 Set Features (09h): Supported 00:13:23.569 Get Features (0Ah): Supported 00:13:23.569 Asynchronous Event Request (0Ch): Supported 00:13:23.569 Keep Alive (18h): Supported 00:13:23.569 I/O Commands 00:13:23.569 ------------ 00:13:23.569 Flush (00h): Supported LBA-Change 00:13:23.569 Write (01h): Supported LBA-Change 00:13:23.569 Read (02h): Supported 00:13:23.569 Compare (05h): Supported 00:13:23.569 Write Zeroes (08h): Supported LBA-Change 00:13:23.569 Dataset Management (09h): Supported LBA-Change 00:13:23.569 Copy (19h): Supported LBA-Change 00:13:23.569 00:13:23.569 Error Log 00:13:23.569 ========= 00:13:23.569 00:13:23.569 Arbitration 00:13:23.569 =========== 00:13:23.569 Arbitration Burst: 1 00:13:23.569 00:13:23.569 Power Management 00:13:23.569 ================ 00:13:23.569 Number of Power States: 1 00:13:23.569 Current Power State: Power State #0 00:13:23.569 Power State #0: 00:13:23.569 Max Power: 0.00 W 00:13:23.569 Non-Operational State: Operational 00:13:23.569 Entry Latency: Not Reported 00:13:23.569 Exit Latency: Not Reported 00:13:23.569 Relative Read Throughput: 0 00:13:23.569 Relative Read Latency: 0 00:13:23.569 Relative Write Throughput: 0 00:13:23.569 Relative Write Latency: 0 00:13:23.569 Idle Power: Not Reported 00:13:23.569 Active Power: Not Reported 00:13:23.569 Non-Operational Permissive Mode: Not Supported 00:13:23.569 00:13:23.569 Health Information 00:13:23.569 ================== 00:13:23.569 Critical Warnings: 00:13:23.569 Available Spare Space: OK 00:13:23.569 Temperature: OK 00:13:23.569 Device Reliability: OK 00:13:23.569 Read Only: No 00:13:23.569 Volatile Memory Backup: OK 00:13:23.569 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:23.569 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:23.569 Available Spare: 0% 00:13:23.569 Available Sp[2024-11-07 10:41:51.056818] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:23.569 [2024-11-07 10:41:51.056825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:23.569 [2024-11-07 10:41:51.056850] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:23.569 [2024-11-07 10:41:51.056858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.569 [2024-11-07 10:41:51.056864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.569 [2024-11-07 10:41:51.056869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.569 [2024-11-07 10:41:51.056874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:23.569 [2024-11-07 10:41:51.059444] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:23.569 [2024-11-07 10:41:51.059456] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:23.569 [2024-11-07 10:41:51.059917] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:23.569 [2024-11-07 10:41:51.059970] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:23.569 [2024-11-07 10:41:51.059977] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:23.569 [2024-11-07 10:41:51.060924] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:23.569 [2024-11-07 10:41:51.060936] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:23.569 [2024-11-07 10:41:51.060984] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:23.569 [2024-11-07 10:41:51.062961] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:23.569 are Threshold: 0% 00:13:23.569 Life Percentage Used: 0% 00:13:23.569 Data Units Read: 0 00:13:23.569 Data Units Written: 0 00:13:23.569 Host Read Commands: 0 00:13:23.569 Host Write Commands: 0 00:13:23.569 Controller Busy Time: 0 minutes 00:13:23.569 Power Cycles: 0 00:13:23.569 Power On Hours: 0 hours 00:13:23.569 Unsafe Shutdowns: 0 00:13:23.569 Unrecoverable Media Errors: 0 00:13:23.569 Lifetime Error Log Entries: 0 00:13:23.569 Warning Temperature Time: 0 minutes 00:13:23.569 Critical Temperature Time: 0 minutes 00:13:23.569 00:13:23.569 Number of Queues 00:13:23.569 ================ 00:13:23.569 Number of I/O Submission Queues: 127 00:13:23.569 Number of I/O Completion Queues: 127 00:13:23.569 00:13:23.569 Active Namespaces 00:13:23.569 ================= 00:13:23.569 Namespace ID:1 00:13:23.569 Error Recovery Timeout: Unlimited 00:13:23.569 Command Set Identifier: NVM (00h) 00:13:23.569 Deallocate: Supported 00:13:23.569 Deallocated/Unwritten Error: Not Supported 00:13:23.569 Deallocated Read Value: Unknown 00:13:23.569 Deallocate in Write Zeroes: Not Supported 00:13:23.569 Deallocated Guard Field: 0xFFFF 00:13:23.569 Flush: Supported 00:13:23.569 Reservation: Supported 00:13:23.569 Namespace Sharing Capabilities: Multiple Controllers 00:13:23.569 Size (in LBAs): 131072 (0GiB) 00:13:23.569 Capacity (in LBAs): 131072 (0GiB) 00:13:23.569 Utilization (in LBAs): 131072 (0GiB) 00:13:23.569 NGUID: E37C8D5358EE4F399BACE38A1E55080E 00:13:23.569 UUID: e37c8d53-58ee-4f39-9bac-e38a1e55080e 00:13:23.569 Thin Provisioning: Not Supported 00:13:23.569 Per-NS Atomic Units: Yes 00:13:23.569 Atomic Boundary Size (Normal): 0 00:13:23.569 Atomic Boundary Size (PFail): 0 00:13:23.569 Atomic Boundary Offset: 0 00:13:23.569 Maximum Single Source Range Length: 65535 00:13:23.569 Maximum Copy Length: 65535 00:13:23.569 Maximum Source Range Count: 1 00:13:23.569 NGUID/EUI64 Never Reused: No 00:13:23.569 Namespace Write Protected: No 00:13:23.569 Number of LBA Formats: 1 00:13:23.569 Current LBA Format: LBA Format #00 00:13:23.569 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:23.569 00:13:23.569 10:41:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
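Everything from the spdk_nvme_identify run above through the perf, reconnect, arbitration and hello_world runs below addresses the controller through a single transport ID string: transport type VFIOUSER, the listener's socket directory as traddr, and the subsystem NQN. A shortened recap of those invocations, with the jenkins workspace prefix dropped for readability:

# Transport ID used by the user-space tools in this run (paths shortened relative to the trace).
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

./build/bin/spdk_nvme_identify -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci
./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2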
00:13:23.827 [2024-11-07 10:41:51.300255] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:29.092 Initializing NVMe Controllers 00:13:29.092 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:29.092 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:29.092 Initialization complete. Launching workers. 00:13:29.092 ======================================================== 00:13:29.092 Latency(us) 00:13:29.092 Device Information : IOPS MiB/s Average min max 00:13:29.092 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39955.17 156.07 3203.89 971.50 8547.59 00:13:29.092 ======================================================== 00:13:29.092 Total : 39955.17 156.07 3203.89 971.50 8547.59 00:13:29.092 00:13:29.092 [2024-11-07 10:41:56.318199] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:29.092 10:41:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:29.092 [2024-11-07 10:41:56.553249] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:34.356 Initializing NVMe Controllers 00:13:34.356 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:34.356 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:34.356 Initialization complete. Launching workers. 
00:13:34.356 ======================================================== 00:13:34.356 Latency(us) 00:13:34.356 Device Information : IOPS MiB/s Average min max 00:13:34.356 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16063.87 62.75 7973.54 4986.32 9976.85 00:13:34.356 ======================================================== 00:13:34.356 Total : 16063.87 62.75 7973.54 4986.32 9976.85 00:13:34.356 00:13:34.356 [2024-11-07 10:42:01.590872] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:34.356 10:42:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:34.356 [2024-11-07 10:42:01.802856] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:39.621 [2024-11-07 10:42:06.882690] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:39.621 Initializing NVMe Controllers 00:13:39.621 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:39.621 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:39.621 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:39.621 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:39.621 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:39.621 Initialization complete. Launching workers. 00:13:39.621 Starting thread on core 2 00:13:39.621 Starting thread on core 3 00:13:39.621 Starting thread on core 1 00:13:39.621 10:42:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:39.621 [2024-11-07 10:42:07.177076] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:42.912 [2024-11-07 10:42:10.247305] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:42.912 Initializing NVMe Controllers 00:13:42.912 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:42.912 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:42.912 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:42.912 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:42.912 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:42.912 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:42.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:42.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:42.912 Initialization complete. Launching workers. 
00:13:42.912 Starting thread on core 1 with urgent priority queue 00:13:42.912 Starting thread on core 2 with urgent priority queue 00:13:42.912 Starting thread on core 3 with urgent priority queue 00:13:42.912 Starting thread on core 0 with urgent priority queue 00:13:42.912 SPDK bdev Controller (SPDK1 ) core 0: 7994.33 IO/s 12.51 secs/100000 ios 00:13:42.912 SPDK bdev Controller (SPDK1 ) core 1: 8300.00 IO/s 12.05 secs/100000 ios 00:13:42.912 SPDK bdev Controller (SPDK1 ) core 2: 8378.00 IO/s 11.94 secs/100000 ios 00:13:42.912 SPDK bdev Controller (SPDK1 ) core 3: 9409.67 IO/s 10.63 secs/100000 ios 00:13:42.912 ======================================================== 00:13:42.912 00:13:42.912 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:42.912 [2024-11-07 10:42:10.546910] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:42.912 Initializing NVMe Controllers 00:13:42.912 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:42.912 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:42.912 Namespace ID: 1 size: 0GB 00:13:42.912 Initialization complete. 00:13:42.912 INFO: using host memory buffer for IO 00:13:42.912 Hello world! 00:13:43.170 [2024-11-07 10:42:10.583169] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:43.170 10:42:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:43.428 [2024-11-07 10:42:10.870860] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:44.363 Initializing NVMe Controllers 00:13:44.363 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:44.363 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:44.364 Initialization complete. Launching workers. 
00:13:44.364 submit (in ns) avg, min, max = 6607.7, 3267.8, 4000122.6 00:13:44.364 complete (in ns) avg, min, max = 19996.5, 1870.4, 3998271.3 00:13:44.364 00:13:44.364 Submit histogram 00:13:44.364 ================ 00:13:44.364 Range in us Cumulative Count 00:13:44.364 3.256 - 3.270: 0.0121% ( 2) 00:13:44.364 3.270 - 3.283: 0.1448% ( 22) 00:13:44.364 3.283 - 3.297: 0.9591% ( 135) 00:13:44.364 3.297 - 3.311: 2.1896% ( 204) 00:13:44.364 3.311 - 3.325: 4.1440% ( 324) 00:13:44.364 3.325 - 3.339: 7.6065% ( 574) 00:13:44.364 3.339 - 3.353: 12.9992% ( 894) 00:13:44.364 3.353 - 3.367: 18.4099% ( 897) 00:13:44.364 3.367 - 3.381: 24.5084% ( 1011) 00:13:44.364 3.381 - 3.395: 30.7154% ( 1029) 00:13:44.364 3.395 - 3.409: 35.8125% ( 845) 00:13:44.364 3.409 - 3.423: 40.7890% ( 825) 00:13:44.364 3.423 - 3.437: 46.0852% ( 878) 00:13:44.364 3.437 - 3.450: 51.3150% ( 867) 00:13:44.364 3.450 - 3.464: 55.7003% ( 727) 00:13:44.364 3.464 - 3.478: 60.7432% ( 836) 00:13:44.364 3.478 - 3.492: 67.6378% ( 1143) 00:13:44.364 3.492 - 3.506: 72.3670% ( 784) 00:13:44.364 3.506 - 3.520: 76.1913% ( 634) 00:13:44.364 3.520 - 3.534: 80.5948% ( 730) 00:13:44.364 3.534 - 3.548: 83.8219% ( 535) 00:13:44.364 3.548 - 3.562: 85.8186% ( 331) 00:13:44.364 3.562 - 3.590: 87.3266% ( 250) 00:13:44.364 3.590 - 3.617: 88.1711% ( 140) 00:13:44.364 3.617 - 3.645: 89.5343% ( 226) 00:13:44.364 3.645 - 3.673: 91.4103% ( 311) 00:13:44.364 3.673 - 3.701: 93.2380% ( 303) 00:13:44.364 3.701 - 3.729: 94.9873% ( 290) 00:13:44.364 3.729 - 3.757: 96.5135% ( 253) 00:13:44.364 3.757 - 3.784: 97.6475% ( 188) 00:13:44.364 3.784 - 3.812: 98.5644% ( 152) 00:13:44.364 3.812 - 3.840: 99.0771% ( 85) 00:13:44.364 3.840 - 3.868: 99.4028% ( 54) 00:13:44.364 3.868 - 3.896: 99.5778% ( 29) 00:13:44.364 3.896 - 3.923: 99.6441% ( 11) 00:13:44.364 3.923 - 3.951: 99.6501% ( 1) 00:13:44.364 3.951 - 3.979: 99.6562% ( 1) 00:13:44.364 5.343 - 5.370: 99.6622% ( 1) 00:13:44.364 5.398 - 5.426: 99.6682% ( 1) 00:13:44.364 5.482 - 5.510: 99.6743% ( 1) 00:13:44.364 5.537 - 5.565: 99.6984% ( 4) 00:13:44.364 5.593 - 5.621: 99.7105% ( 2) 00:13:44.364 5.621 - 5.649: 99.7165% ( 1) 00:13:44.364 5.677 - 5.704: 99.7225% ( 1) 00:13:44.364 5.704 - 5.732: 99.7286% ( 1) 00:13:44.364 5.843 - 5.871: 99.7346% ( 1) 00:13:44.364 6.094 - 6.122: 99.7406% ( 1) 00:13:44.364 6.233 - 6.261: 99.7467% ( 1) 00:13:44.364 6.261 - 6.289: 99.7587% ( 2) 00:13:44.364 6.372 - 6.400: 99.7647% ( 1) 00:13:44.364 6.428 - 6.456: 99.7708% ( 1) 00:13:44.364 6.511 - 6.539: 99.7768% ( 1) 00:13:44.364 6.567 - 6.595: 99.7828% ( 1) 00:13:44.364 6.790 - 6.817: 99.7889% ( 1) 00:13:44.364 6.901 - 6.929: 99.7949% ( 1) 00:13:44.364 6.957 - 6.984: 99.8009% ( 1) 00:13:44.364 6.984 - 7.012: 99.8070% ( 1) 00:13:44.364 7.179 - 7.235: 99.8130% ( 1) 00:13:44.364 7.235 - 7.290: 99.8190% ( 1) 00:13:44.364 7.290 - 7.346: 99.8371% ( 3) 00:13:44.364 7.346 - 7.402: 99.8432% ( 1) 00:13:44.364 7.513 - 7.569: 99.8492% ( 1) 00:13:44.364 7.569 - 7.624: 99.8552% ( 1) 00:13:44.364 7.736 - 7.791: 99.8613% ( 1) 00:13:44.364 7.791 - 7.847: 99.8673% ( 1) 00:13:44.364 7.847 - 7.903: 99.8733% ( 1) 00:13:44.364 7.903 - 7.958: 99.8794% ( 1) 00:13:44.364 8.682 - 8.737: 99.8854% ( 1) 00:13:44.364 8.737 - 8.793: 99.8914% ( 1) 00:13:44.364 8.849 - 8.904: 99.9035% ( 2) 00:13:44.364 9.016 - 9.071: 99.9095% ( 1) 00:13:44.364 9.572 - 9.628: 99.9156% ( 1) 00:13:44.364 11.910 - 11.965: 99.9216% ( 1) 00:13:44.364 3989.148 - 4017.642: 100.0000% ( 13) 00:13:44.364 00:13:44.364 Complete histogram 00:13:44.364 ================== 00:13:44.364 Range in us Cumulative Count 
00:13:44.364 1.864 - 1.878: 0.0302% ( 5) 00:13:44.364 1.878 - 1.892: 0.9832% ( 158) 00:13:44.364 1.892 - 1.906: 2.9014% ( 318) 00:13:44.364 1.906 - [2024-11-07 10:42:11.892693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:44.364 1.920: 5.2962% ( 397) 00:13:44.364 1.920 - 1.934: 32.5612% ( 4520) 00:13:44.364 1.934 - 1.948: 79.7080% ( 7816) 00:13:44.364 1.948 - 1.962: 95.0838% ( 2549) 00:13:44.364 1.962 - 1.976: 98.3351% ( 539) 00:13:44.364 1.976 - 1.990: 99.1615% ( 137) 00:13:44.364 1.990 - 2.003: 99.2701% ( 18) 00:13:44.364 2.003 - 2.017: 99.2882% ( 3) 00:13:44.364 2.017 - 2.031: 99.3003% ( 2) 00:13:44.364 2.045 - 2.059: 99.3063% ( 1) 00:13:44.364 2.087 - 2.101: 99.3123% ( 1) 00:13:44.364 2.212 - 2.226: 99.3184% ( 1) 00:13:44.364 3.784 - 3.812: 99.3244% ( 1) 00:13:44.364 3.923 - 3.951: 99.3304% ( 1) 00:13:44.364 3.951 - 3.979: 99.3365% ( 1) 00:13:44.364 3.979 - 4.007: 99.3425% ( 1) 00:13:44.364 4.090 - 4.118: 99.3606% ( 3) 00:13:44.364 4.118 - 4.146: 99.3727% ( 2) 00:13:44.364 4.174 - 4.202: 99.3787% ( 1) 00:13:44.364 4.257 - 4.285: 99.3908% ( 2) 00:13:44.364 4.341 - 4.369: 99.3968% ( 1) 00:13:44.364 4.508 - 4.536: 99.4028% ( 1) 00:13:44.364 5.037 - 5.064: 99.4089% ( 1) 00:13:44.364 5.203 - 5.231: 99.4149% ( 1) 00:13:44.364 5.231 - 5.259: 99.4209% ( 1) 00:13:44.364 5.315 - 5.343: 99.4270% ( 1) 00:13:44.364 5.370 - 5.398: 99.4330% ( 1) 00:13:44.364 5.398 - 5.426: 99.4390% ( 1) 00:13:44.364 5.482 - 5.510: 99.4450% ( 1) 00:13:44.364 5.510 - 5.537: 99.4511% ( 1) 00:13:44.364 5.537 - 5.565: 99.4631% ( 2) 00:13:44.364 5.677 - 5.704: 99.4692% ( 1) 00:13:44.364 5.704 - 5.732: 99.4752% ( 1) 00:13:44.364 5.760 - 5.788: 99.4812% ( 1) 00:13:44.364 5.816 - 5.843: 99.4873% ( 1) 00:13:44.364 5.955 - 5.983: 99.4933% ( 1) 00:13:44.364 6.150 - 6.177: 99.4993% ( 1) 00:13:44.364 6.289 - 6.317: 99.5054% ( 1) 00:13:44.364 6.706 - 6.734: 99.5114% ( 1) 00:13:44.364 7.096 - 7.123: 99.5174% ( 1) 00:13:44.364 7.123 - 7.179: 99.5235% ( 1) 00:13:44.364 7.457 - 7.513: 99.5295% ( 1) 00:13:44.364 7.847 - 7.903: 99.5355% ( 1) 00:13:44.364 8.070 - 8.125: 99.5416% ( 1) 00:13:44.364 15.583 - 15.694: 99.5476% ( 1) 00:13:44.364 3647.221 - 3675.715: 99.5536% ( 1) 00:13:44.364 3989.148 - 4017.642: 100.0000% ( 74) 00:13:44.364 00:13:44.364 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:44.364 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:44.364 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:44.364 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:44.364 10:42:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:44.622 [ 00:13:44.622 { 00:13:44.622 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:44.622 "subtype": "Discovery", 00:13:44.622 "listen_addresses": [], 00:13:44.622 "allow_any_host": true, 00:13:44.622 "hosts": [] 00:13:44.623 }, 00:13:44.623 { 00:13:44.623 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:44.623 "subtype": "NVMe", 00:13:44.623 "listen_addresses": [ 00:13:44.623 { 00:13:44.623 "trtype": "VFIOUSER", 00:13:44.623 "adrfam": "IPv4", 00:13:44.623 "traddr": 
"/var/run/vfio-user/domain/vfio-user1/1", 00:13:44.623 "trsvcid": "0" 00:13:44.623 } 00:13:44.623 ], 00:13:44.623 "allow_any_host": true, 00:13:44.623 "hosts": [], 00:13:44.623 "serial_number": "SPDK1", 00:13:44.623 "model_number": "SPDK bdev Controller", 00:13:44.623 "max_namespaces": 32, 00:13:44.623 "min_cntlid": 1, 00:13:44.623 "max_cntlid": 65519, 00:13:44.623 "namespaces": [ 00:13:44.623 { 00:13:44.623 "nsid": 1, 00:13:44.623 "bdev_name": "Malloc1", 00:13:44.623 "name": "Malloc1", 00:13:44.623 "nguid": "E37C8D5358EE4F399BACE38A1E55080E", 00:13:44.623 "uuid": "e37c8d53-58ee-4f39-9bac-e38a1e55080e" 00:13:44.623 } 00:13:44.623 ] 00:13:44.623 }, 00:13:44.623 { 00:13:44.623 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:44.623 "subtype": "NVMe", 00:13:44.623 "listen_addresses": [ 00:13:44.623 { 00:13:44.623 "trtype": "VFIOUSER", 00:13:44.623 "adrfam": "IPv4", 00:13:44.623 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:44.623 "trsvcid": "0" 00:13:44.623 } 00:13:44.623 ], 00:13:44.623 "allow_any_host": true, 00:13:44.623 "hosts": [], 00:13:44.623 "serial_number": "SPDK2", 00:13:44.623 "model_number": "SPDK bdev Controller", 00:13:44.623 "max_namespaces": 32, 00:13:44.623 "min_cntlid": 1, 00:13:44.623 "max_cntlid": 65519, 00:13:44.623 "namespaces": [ 00:13:44.623 { 00:13:44.623 "nsid": 1, 00:13:44.623 "bdev_name": "Malloc2", 00:13:44.623 "name": "Malloc2", 00:13:44.623 "nguid": "BE3C0F066B774B4CA84DF7649A542125", 00:13:44.623 "uuid": "be3c0f06-6b77-4b4c-a84d-f7649a542125" 00:13:44.623 } 00:13:44.623 ] 00:13:44.623 } 00:13:44.623 ] 00:13:44.623 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:44.623 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2650415 00:13:44.623 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:44.623 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:44.623 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:13:44.623 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:44.623 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:44.623 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:13:44.623 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:44.623 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:44.881 [2024-11-07 10:42:12.317924] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:44.881 Malloc3 00:13:44.881 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:45.140 [2024-11-07 10:42:12.561778] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:45.140 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:45.140 Asynchronous Event Request test 00:13:45.140 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:45.140 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:45.140 Registering asynchronous event callbacks... 00:13:45.140 Starting namespace attribute notice tests for all controllers... 00:13:45.140 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:45.140 aer_cb - Changed Namespace 00:13:45.140 Cleaning up... 00:13:45.140 [ 00:13:45.140 { 00:13:45.140 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:45.140 "subtype": "Discovery", 00:13:45.140 "listen_addresses": [], 00:13:45.140 "allow_any_host": true, 00:13:45.140 "hosts": [] 00:13:45.140 }, 00:13:45.140 { 00:13:45.140 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:45.140 "subtype": "NVMe", 00:13:45.140 "listen_addresses": [ 00:13:45.140 { 00:13:45.140 "trtype": "VFIOUSER", 00:13:45.140 "adrfam": "IPv4", 00:13:45.140 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:45.140 "trsvcid": "0" 00:13:45.140 } 00:13:45.140 ], 00:13:45.140 "allow_any_host": true, 00:13:45.140 "hosts": [], 00:13:45.140 "serial_number": "SPDK1", 00:13:45.140 "model_number": "SPDK bdev Controller", 00:13:45.140 "max_namespaces": 32, 00:13:45.140 "min_cntlid": 1, 00:13:45.140 "max_cntlid": 65519, 00:13:45.140 "namespaces": [ 00:13:45.140 { 00:13:45.140 "nsid": 1, 00:13:45.140 "bdev_name": "Malloc1", 00:13:45.140 "name": "Malloc1", 00:13:45.140 "nguid": "E37C8D5358EE4F399BACE38A1E55080E", 00:13:45.140 "uuid": "e37c8d53-58ee-4f39-9bac-e38a1e55080e" 00:13:45.140 }, 00:13:45.140 { 00:13:45.140 "nsid": 2, 00:13:45.140 "bdev_name": "Malloc3", 00:13:45.140 "name": "Malloc3", 00:13:45.140 "nguid": "C3753B15DFF442C6A892499D05E00A35", 00:13:45.140 "uuid": "c3753b15-dff4-42c6-a892-499d05e00a35" 00:13:45.140 } 00:13:45.140 ] 00:13:45.140 }, 00:13:45.140 { 00:13:45.140 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:45.140 "subtype": "NVMe", 00:13:45.140 "listen_addresses": [ 00:13:45.140 { 00:13:45.140 "trtype": "VFIOUSER", 00:13:45.140 "adrfam": "IPv4", 00:13:45.140 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:45.140 "trsvcid": "0" 00:13:45.140 } 00:13:45.140 ], 00:13:45.140 "allow_any_host": true, 00:13:45.140 "hosts": [], 00:13:45.140 "serial_number": "SPDK2", 00:13:45.140 "model_number": "SPDK bdev 
Controller", 00:13:45.140 "max_namespaces": 32, 00:13:45.140 "min_cntlid": 1, 00:13:45.140 "max_cntlid": 65519, 00:13:45.140 "namespaces": [ 00:13:45.140 { 00:13:45.140 "nsid": 1, 00:13:45.140 "bdev_name": "Malloc2", 00:13:45.140 "name": "Malloc2", 00:13:45.140 "nguid": "BE3C0F066B774B4CA84DF7649A542125", 00:13:45.140 "uuid": "be3c0f06-6b77-4b4c-a84d-f7649a542125" 00:13:45.140 } 00:13:45.140 ] 00:13:45.140 } 00:13:45.140 ] 00:13:45.140 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2650415 00:13:45.140 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:45.140 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:45.140 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:45.140 10:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:45.140 [2024-11-07 10:42:12.806234] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:13:45.140 [2024-11-07 10:42:12.806282] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2650474 ] 00:13:45.400 [2024-11-07 10:42:12.847230] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:45.400 [2024-11-07 10:42:12.851479] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:45.400 [2024-11-07 10:42:12.851502] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f52c553d000 00:13:45.400 [2024-11-07 10:42:12.852478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:45.400 [2024-11-07 10:42:12.853489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:45.400 [2024-11-07 10:42:12.854497] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:45.400 [2024-11-07 10:42:12.855503] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:45.400 [2024-11-07 10:42:12.856506] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:45.400 [2024-11-07 10:42:12.857510] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:45.400 [2024-11-07 10:42:12.858516] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:45.400 [2024-11-07 10:42:12.859519] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:13:45.400 [2024-11-07 10:42:12.860533] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:45.400 [2024-11-07 10:42:12.860543] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f52c5532000 00:13:45.400 [2024-11-07 10:42:12.861482] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:45.400 [2024-11-07 10:42:12.871004] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:45.400 [2024-11-07 10:42:12.871027] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:45.401 [2024-11-07 10:42:12.876102] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:45.401 [2024-11-07 10:42:12.876143] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:45.401 [2024-11-07 10:42:12.876215] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:45.401 [2024-11-07 10:42:12.876228] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:45.401 [2024-11-07 10:42:12.876233] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:45.401 [2024-11-07 10:42:12.877118] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:45.401 [2024-11-07 10:42:12.877129] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:45.401 [2024-11-07 10:42:12.877137] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:45.401 [2024-11-07 10:42:12.878124] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:45.401 [2024-11-07 10:42:12.878133] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:45.401 [2024-11-07 10:42:12.878140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:45.401 [2024-11-07 10:42:12.879130] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:45.401 [2024-11-07 10:42:12.879139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:45.401 [2024-11-07 10:42:12.880138] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:45.401 [2024-11-07 10:42:12.880147] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:13:45.401 [2024-11-07 10:42:12.880152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:45.401 [2024-11-07 10:42:12.880158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:45.401 [2024-11-07 10:42:12.880265] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:45.401 [2024-11-07 10:42:12.880269] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:45.401 [2024-11-07 10:42:12.880276] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:45.401 [2024-11-07 10:42:12.881140] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:45.401 [2024-11-07 10:42:12.882148] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:45.401 [2024-11-07 10:42:12.883158] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:45.401 [2024-11-07 10:42:12.884162] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:45.401 [2024-11-07 10:42:12.884198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:45.401 [2024-11-07 10:42:12.885172] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:45.401 [2024-11-07 10:42:12.885181] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:45.401 [2024-11-07 10:42:12.885186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:45.401 [2024-11-07 10:42:12.885202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:45.401 [2024-11-07 10:42:12.885209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:45.401 [2024-11-07 10:42:12.885223] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:45.401 [2024-11-07 10:42:12.885228] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.401 [2024-11-07 10:42:12.885231] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.401 [2024-11-07 10:42:12.885242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.401 [2024-11-07 10:42:12.892440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:45.401 
[2024-11-07 10:42:12.892450] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:45.401 [2024-11-07 10:42:12.892455] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:45.401 [2024-11-07 10:42:12.892459] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:45.401 [2024-11-07 10:42:12.892464] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:45.401 [2024-11-07 10:42:12.892468] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:45.401 [2024-11-07 10:42:12.892472] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:45.401 [2024-11-07 10:42:12.892477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:45.401 [2024-11-07 10:42:12.892483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:45.401 [2024-11-07 10:42:12.892493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:45.401 [2024-11-07 10:42:12.900439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:45.401 [2024-11-07 10:42:12.900453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.401 [2024-11-07 10:42:12.900461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.401 [2024-11-07 10:42:12.900469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.401 [2024-11-07 10:42:12.900476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:45.401 [2024-11-07 10:42:12.900480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:45.401 [2024-11-07 10:42:12.900491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:45.401 [2024-11-07 10:42:12.900499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:45.401 [2024-11-07 10:42:12.908438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:45.401 [2024-11-07 10:42:12.908446] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:45.401 [2024-11-07 10:42:12.908450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:13:45.401 [2024-11-07 10:42:12.908459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:45.401 [2024-11-07 10:42:12.908464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:45.401 [2024-11-07 10:42:12.908472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:45.401 [2024-11-07 10:42:12.916447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:45.401 [2024-11-07 10:42:12.916502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:45.401 [2024-11-07 10:42:12.916510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:45.401 [2024-11-07 10:42:12.916518] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:45.401 [2024-11-07 10:42:12.916522] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:45.401 [2024-11-07 10:42:12.916525] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.401 [2024-11-07 10:42:12.916531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:45.401 [2024-11-07 10:42:12.924438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:45.401 [2024-11-07 10:42:12.924451] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:45.401 [2024-11-07 10:42:12.924462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:45.401 [2024-11-07 10:42:12.924469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:45.401 [2024-11-07 10:42:12.924477] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:45.401 [2024-11-07 10:42:12.924482] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.401 [2024-11-07 10:42:12.924485] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.401 [2024-11-07 10:42:12.924491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.401 [2024-11-07 10:42:12.932439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:45.402 [2024-11-07 10:42:12.932450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:45.402 [2024-11-07 10:42:12.932457] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:13:45.402 [2024-11-07 10:42:12.932463] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:45.402 [2024-11-07 10:42:12.932467] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.402 [2024-11-07 10:42:12.932470] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.402 [2024-11-07 10:42:12.932476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.402 [2024-11-07 10:42:12.940437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:45.402 [2024-11-07 10:42:12.940449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:45.402 [2024-11-07 10:42:12.940455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:45.402 [2024-11-07 10:42:12.940462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:45.402 [2024-11-07 10:42:12.940467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:45.402 [2024-11-07 10:42:12.940472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:45.402 [2024-11-07 10:42:12.940477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:45.402 [2024-11-07 10:42:12.940481] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:45.402 [2024-11-07 10:42:12.940486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:45.402 [2024-11-07 10:42:12.940490] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:45.402 [2024-11-07 10:42:12.940506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:45.402 [2024-11-07 10:42:12.948442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:45.402 [2024-11-07 10:42:12.948456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:45.402 [2024-11-07 10:42:12.956441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:45.402 [2024-11-07 10:42:12.956455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:45.402 [2024-11-07 10:42:12.964438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:13:45.402 [2024-11-07 10:42:12.964451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:45.402 [2024-11-07 10:42:12.972439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:45.402 [2024-11-07 10:42:12.972455] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:45.402 [2024-11-07 10:42:12.972460] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:45.402 [2024-11-07 10:42:12.972463] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:45.402 [2024-11-07 10:42:12.972466] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:45.402 [2024-11-07 10:42:12.972470] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:45.402 [2024-11-07 10:42:12.972476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:45.402 [2024-11-07 10:42:12.972482] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:45.402 [2024-11-07 10:42:12.972486] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:45.402 [2024-11-07 10:42:12.972490] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.402 [2024-11-07 10:42:12.972495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:45.402 [2024-11-07 10:42:12.972501] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:45.402 [2024-11-07 10:42:12.972506] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:45.402 [2024-11-07 10:42:12.972509] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.402 [2024-11-07 10:42:12.972514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:45.402 [2024-11-07 10:42:12.972521] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:45.402 [2024-11-07 10:42:12.972525] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:45.402 [2024-11-07 10:42:12.972528] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:45.402 [2024-11-07 10:42:12.972533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:45.402 [2024-11-07 10:42:12.980438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:45.402 [2024-11-07 10:42:12.980452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:45.402 [2024-11-07 10:42:12.980462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:45.402 
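With -L nvme, -L nvme_vfio and -L vfio_pci enabled, spdk_nvme_identify logs every step of the controller bring-up traced above: connect the admin queue, read VS and CAP, observe CC.EN = 0 and CSTS.RDY = 0, write CC.EN = 1, poll CSTS.RDY, then walk the identify and feature setup until the ready state. When the full console output of such a run has been saved to a file, the sequence is easier to follow once the per-command PRP noise is filtered out; identify.log is a placeholder name:

# Sketch: condense the -L nvme bring-up trace to the state machine plus the
# CC (offset 0x14) and CSTS (offset 0x1c) register accesses.
grep -E 'setting state to|CC\.EN|CSTS\.RDY|offset 0x14|offset 0x1c' identify.log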
[2024-11-07 10:42:12.980468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:45.402 ===================================================== 00:13:45.402 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:45.402 ===================================================== 00:13:45.402 Controller Capabilities/Features 00:13:45.402 ================================ 00:13:45.402 Vendor ID: 4e58 00:13:45.402 Subsystem Vendor ID: 4e58 00:13:45.402 Serial Number: SPDK2 00:13:45.402 Model Number: SPDK bdev Controller 00:13:45.402 Firmware Version: 25.01 00:13:45.402 Recommended Arb Burst: 6 00:13:45.402 IEEE OUI Identifier: 8d 6b 50 00:13:45.402 Multi-path I/O 00:13:45.402 May have multiple subsystem ports: Yes 00:13:45.402 May have multiple controllers: Yes 00:13:45.402 Associated with SR-IOV VF: No 00:13:45.402 Max Data Transfer Size: 131072 00:13:45.402 Max Number of Namespaces: 32 00:13:45.402 Max Number of I/O Queues: 127 00:13:45.402 NVMe Specification Version (VS): 1.3 00:13:45.402 NVMe Specification Version (Identify): 1.3 00:13:45.402 Maximum Queue Entries: 256 00:13:45.402 Contiguous Queues Required: Yes 00:13:45.402 Arbitration Mechanisms Supported 00:13:45.402 Weighted Round Robin: Not Supported 00:13:45.402 Vendor Specific: Not Supported 00:13:45.402 Reset Timeout: 15000 ms 00:13:45.402 Doorbell Stride: 4 bytes 00:13:45.402 NVM Subsystem Reset: Not Supported 00:13:45.402 Command Sets Supported 00:13:45.402 NVM Command Set: Supported 00:13:45.402 Boot Partition: Not Supported 00:13:45.402 Memory Page Size Minimum: 4096 bytes 00:13:45.402 Memory Page Size Maximum: 4096 bytes 00:13:45.402 Persistent Memory Region: Not Supported 00:13:45.402 Optional Asynchronous Events Supported 00:13:45.402 Namespace Attribute Notices: Supported 00:13:45.402 Firmware Activation Notices: Not Supported 00:13:45.402 ANA Change Notices: Not Supported 00:13:45.402 PLE Aggregate Log Change Notices: Not Supported 00:13:45.402 LBA Status Info Alert Notices: Not Supported 00:13:45.402 EGE Aggregate Log Change Notices: Not Supported 00:13:45.402 Normal NVM Subsystem Shutdown event: Not Supported 00:13:45.402 Zone Descriptor Change Notices: Not Supported 00:13:45.402 Discovery Log Change Notices: Not Supported 00:13:45.402 Controller Attributes 00:13:45.402 128-bit Host Identifier: Supported 00:13:45.402 Non-Operational Permissive Mode: Not Supported 00:13:45.402 NVM Sets: Not Supported 00:13:45.402 Read Recovery Levels: Not Supported 00:13:45.402 Endurance Groups: Not Supported 00:13:45.402 Predictable Latency Mode: Not Supported 00:13:45.402 Traffic Based Keep ALive: Not Supported 00:13:45.402 Namespace Granularity: Not Supported 00:13:45.402 SQ Associations: Not Supported 00:13:45.402 UUID List: Not Supported 00:13:45.402 Multi-Domain Subsystem: Not Supported 00:13:45.402 Fixed Capacity Management: Not Supported 00:13:45.402 Variable Capacity Management: Not Supported 00:13:45.402 Delete Endurance Group: Not Supported 00:13:45.402 Delete NVM Set: Not Supported 00:13:45.402 Extended LBA Formats Supported: Not Supported 00:13:45.402 Flexible Data Placement Supported: Not Supported 00:13:45.402 00:13:45.402 Controller Memory Buffer Support 00:13:45.402 ================================ 00:13:45.402 Supported: No 00:13:45.402 00:13:45.402 Persistent Memory Region Support 00:13:45.402 ================================ 00:13:45.402 Supported: No 00:13:45.402 00:13:45.402 Admin Command Set Attributes 
00:13:45.402 ============================ 00:13:45.403 Security Send/Receive: Not Supported 00:13:45.403 Format NVM: Not Supported 00:13:45.403 Firmware Activate/Download: Not Supported 00:13:45.403 Namespace Management: Not Supported 00:13:45.403 Device Self-Test: Not Supported 00:13:45.403 Directives: Not Supported 00:13:45.403 NVMe-MI: Not Supported 00:13:45.403 Virtualization Management: Not Supported 00:13:45.403 Doorbell Buffer Config: Not Supported 00:13:45.403 Get LBA Status Capability: Not Supported 00:13:45.403 Command & Feature Lockdown Capability: Not Supported 00:13:45.403 Abort Command Limit: 4 00:13:45.403 Async Event Request Limit: 4 00:13:45.403 Number of Firmware Slots: N/A 00:13:45.403 Firmware Slot 1 Read-Only: N/A 00:13:45.403 Firmware Activation Without Reset: N/A 00:13:45.403 Multiple Update Detection Support: N/A 00:13:45.403 Firmware Update Granularity: No Information Provided 00:13:45.403 Per-Namespace SMART Log: No 00:13:45.403 Asymmetric Namespace Access Log Page: Not Supported 00:13:45.403 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:45.403 Command Effects Log Page: Supported 00:13:45.403 Get Log Page Extended Data: Supported 00:13:45.403 Telemetry Log Pages: Not Supported 00:13:45.403 Persistent Event Log Pages: Not Supported 00:13:45.403 Supported Log Pages Log Page: May Support 00:13:45.403 Commands Supported & Effects Log Page: Not Supported 00:13:45.403 Feature Identifiers & Effects Log Page:May Support 00:13:45.403 NVMe-MI Commands & Effects Log Page: May Support 00:13:45.403 Data Area 4 for Telemetry Log: Not Supported 00:13:45.403 Error Log Page Entries Supported: 128 00:13:45.403 Keep Alive: Supported 00:13:45.403 Keep Alive Granularity: 10000 ms 00:13:45.403 00:13:45.403 NVM Command Set Attributes 00:13:45.403 ========================== 00:13:45.403 Submission Queue Entry Size 00:13:45.403 Max: 64 00:13:45.403 Min: 64 00:13:45.403 Completion Queue Entry Size 00:13:45.403 Max: 16 00:13:45.403 Min: 16 00:13:45.403 Number of Namespaces: 32 00:13:45.403 Compare Command: Supported 00:13:45.403 Write Uncorrectable Command: Not Supported 00:13:45.403 Dataset Management Command: Supported 00:13:45.403 Write Zeroes Command: Supported 00:13:45.403 Set Features Save Field: Not Supported 00:13:45.403 Reservations: Not Supported 00:13:45.403 Timestamp: Not Supported 00:13:45.403 Copy: Supported 00:13:45.403 Volatile Write Cache: Present 00:13:45.403 Atomic Write Unit (Normal): 1 00:13:45.403 Atomic Write Unit (PFail): 1 00:13:45.403 Atomic Compare & Write Unit: 1 00:13:45.403 Fused Compare & Write: Supported 00:13:45.403 Scatter-Gather List 00:13:45.403 SGL Command Set: Supported (Dword aligned) 00:13:45.403 SGL Keyed: Not Supported 00:13:45.403 SGL Bit Bucket Descriptor: Not Supported 00:13:45.403 SGL Metadata Pointer: Not Supported 00:13:45.403 Oversized SGL: Not Supported 00:13:45.403 SGL Metadata Address: Not Supported 00:13:45.403 SGL Offset: Not Supported 00:13:45.403 Transport SGL Data Block: Not Supported 00:13:45.403 Replay Protected Memory Block: Not Supported 00:13:45.403 00:13:45.403 Firmware Slot Information 00:13:45.403 ========================= 00:13:45.403 Active slot: 1 00:13:45.403 Slot 1 Firmware Revision: 25.01 00:13:45.403 00:13:45.403 00:13:45.403 Commands Supported and Effects 00:13:45.403 ============================== 00:13:45.403 Admin Commands 00:13:45.403 -------------- 00:13:45.403 Get Log Page (02h): Supported 00:13:45.403 Identify (06h): Supported 00:13:45.403 Abort (08h): Supported 00:13:45.403 Set Features (09h): Supported 
00:13:45.403 Get Features (0Ah): Supported 00:13:45.403 Asynchronous Event Request (0Ch): Supported 00:13:45.403 Keep Alive (18h): Supported 00:13:45.403 I/O Commands 00:13:45.403 ------------ 00:13:45.403 Flush (00h): Supported LBA-Change 00:13:45.403 Write (01h): Supported LBA-Change 00:13:45.403 Read (02h): Supported 00:13:45.403 Compare (05h): Supported 00:13:45.403 Write Zeroes (08h): Supported LBA-Change 00:13:45.403 Dataset Management (09h): Supported LBA-Change 00:13:45.403 Copy (19h): Supported LBA-Change 00:13:45.403 00:13:45.403 Error Log 00:13:45.403 ========= 00:13:45.403 00:13:45.403 Arbitration 00:13:45.403 =========== 00:13:45.403 Arbitration Burst: 1 00:13:45.403 00:13:45.403 Power Management 00:13:45.403 ================ 00:13:45.403 Number of Power States: 1 00:13:45.403 Current Power State: Power State #0 00:13:45.403 Power State #0: 00:13:45.403 Max Power: 0.00 W 00:13:45.403 Non-Operational State: Operational 00:13:45.403 Entry Latency: Not Reported 00:13:45.403 Exit Latency: Not Reported 00:13:45.403 Relative Read Throughput: 0 00:13:45.403 Relative Read Latency: 0 00:13:45.403 Relative Write Throughput: 0 00:13:45.403 Relative Write Latency: 0 00:13:45.403 Idle Power: Not Reported 00:13:45.403 Active Power: Not Reported 00:13:45.403 Non-Operational Permissive Mode: Not Supported 00:13:45.403 00:13:45.403 Health Information 00:13:45.403 ================== 00:13:45.403 Critical Warnings: 00:13:45.403 Available Spare Space: OK 00:13:45.403 Temperature: OK 00:13:45.403 Device Reliability: OK 00:13:45.403 Read Only: No 00:13:45.403 Volatile Memory Backup: OK 00:13:45.403 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:45.403 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:45.403 Available Spare: 0% 00:13:45.403 Available Sp[2024-11-07 10:42:12.980555] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:45.403 [2024-11-07 10:42:12.988439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:45.403 [2024-11-07 10:42:12.988468] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:45.403 [2024-11-07 10:42:12.988477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.403 [2024-11-07 10:42:12.988485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.403 [2024-11-07 10:42:12.988491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.403 [2024-11-07 10:42:12.988496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:45.403 [2024-11-07 10:42:12.988546] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:45.403 [2024-11-07 10:42:12.988557] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:45.403 [2024-11-07 10:42:12.989546] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:45.403 [2024-11-07 10:42:12.989590] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:45.403 [2024-11-07 10:42:12.989596] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:45.403 [2024-11-07 10:42:12.990554] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:45.403 [2024-11-07 10:42:12.990566] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:45.403 [2024-11-07 10:42:12.990611] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:45.403 [2024-11-07 10:42:12.993440] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:45.403 are Threshold: 0% 00:13:45.403 Life Percentage Used: 0% 00:13:45.403 Data Units Read: 0 00:13:45.403 Data Units Written: 0 00:13:45.403 Host Read Commands: 0 00:13:45.403 Host Write Commands: 0 00:13:45.403 Controller Busy Time: 0 minutes 00:13:45.403 Power Cycles: 0 00:13:45.403 Power On Hours: 0 hours 00:13:45.403 Unsafe Shutdowns: 0 00:13:45.403 Unrecoverable Media Errors: 0 00:13:45.403 Lifetime Error Log Entries: 0 00:13:45.403 Warning Temperature Time: 0 minutes 00:13:45.403 Critical Temperature Time: 0 minutes 00:13:45.403 00:13:45.403 Number of Queues 00:13:45.403 ================ 00:13:45.403 Number of I/O Submission Queues: 127 00:13:45.403 Number of I/O Completion Queues: 127 00:13:45.403 00:13:45.403 Active Namespaces 00:13:45.403 ================= 00:13:45.403 Namespace ID:1 00:13:45.403 Error Recovery Timeout: Unlimited 00:13:45.403 Command Set Identifier: NVM (00h) 00:13:45.403 Deallocate: Supported 00:13:45.403 Deallocated/Unwritten Error: Not Supported 00:13:45.403 Deallocated Read Value: Unknown 00:13:45.403 Deallocate in Write Zeroes: Not Supported 00:13:45.403 Deallocated Guard Field: 0xFFFF 00:13:45.403 Flush: Supported 00:13:45.403 Reservation: Supported 00:13:45.403 Namespace Sharing Capabilities: Multiple Controllers 00:13:45.403 Size (in LBAs): 131072 (0GiB) 00:13:45.403 Capacity (in LBAs): 131072 (0GiB) 00:13:45.403 Utilization (in LBAs): 131072 (0GiB) 00:13:45.403 NGUID: BE3C0F066B774B4CA84DF7649A542125 00:13:45.403 UUID: be3c0f06-6b77-4b4c-a84d-f7649a542125 00:13:45.404 Thin Provisioning: Not Supported 00:13:45.404 Per-NS Atomic Units: Yes 00:13:45.404 Atomic Boundary Size (Normal): 0 00:13:45.404 Atomic Boundary Size (PFail): 0 00:13:45.404 Atomic Boundary Offset: 0 00:13:45.404 Maximum Single Source Range Length: 65535 00:13:45.404 Maximum Copy Length: 65535 00:13:45.404 Maximum Source Range Count: 1 00:13:45.404 NGUID/EUI64 Never Reused: No 00:13:45.404 Namespace Write Protected: No 00:13:45.404 Number of LBA Formats: 1 00:13:45.404 Current LBA Format: LBA Format #00 00:13:45.404 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:45.404 00:13:45.404 10:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:45.661 [2024-11-07 10:42:13.221873] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:50.922 Initializing NVMe Controllers 00:13:50.922 
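From the `for i in $(seq 1 $NUM_DEVICES)` marker above, the harness repeats for this second vfio-user device the same battery it already ran for the first: identify, perf read and write, reconnect, arbitration, hello_world, overhead, and finally the namespace-attach AER check. A condensed sketch of that per-device loop, reconstructed only from the sh@80 through sh@90 markers visible in this log (the loop body is a simplification, not the script itself; SPDK_DIR is a placeholder and NUM_DEVICES=2 simply matches the two sockets seen here):

# Sketch of the per-device test loop implied by the nvmf_vfio_user.sh markers.
SPDK_DIR=/path/to/spdk
NUM_DEVICES=2

for i in $(seq 1 "$NUM_DEVICES"); do
    traddr=/var/run/vfio-user/domain/vfio-user$i/$i
    subnqn=nqn.2019-07.io.spdk:cnode$i
    trid="trtype:VFIOUSER traddr:$traddr subnqn:$subnqn"

    "$SPDK_DIR"/build/bin/spdk_nvme_identify -r "$trid" -g -L nvme -L nvme_vfio -L vfio_pci
    "$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$trid" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
    "$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$trid" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
    "$SPDK_DIR"/build/examples/reconnect   -r "$trid" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
    "$SPDK_DIR"/build/examples/arbitration -r "$trid" -t 3 -d 256 -g
    "$SPDK_DIR"/build/examples/hello_world -r "$trid" -d 256 -g
    "$SPDK_DIR"/test/nvme/overhead/overhead -r "$trid" -o 4096 -t 1 -H -g -d 256
    # ...followed by the aer_vfio_user namespace-attach check (see the sketch further below).
done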
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:50.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:50.922 Initialization complete. Launching workers. 00:13:50.922 ======================================================== 00:13:50.922 Latency(us) 00:13:50.922 Device Information : IOPS MiB/s Average min max 00:13:50.922 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39947.01 156.04 3203.84 976.22 6619.94 00:13:50.922 ======================================================== 00:13:50.922 Total : 39947.01 156.04 3203.84 976.22 6619.94 00:13:50.922 00:13:50.922 [2024-11-07 10:42:18.326696] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:50.922 10:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:50.922 [2024-11-07 10:42:18.570431] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.184 Initializing NVMe Controllers 00:13:56.184 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:56.184 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:56.184 Initialization complete. Launching workers. 00:13:56.184 ======================================================== 00:13:56.184 Latency(us) 00:13:56.184 Device Information : IOPS MiB/s Average min max 00:13:56.184 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39940.89 156.02 3204.56 992.41 7581.06 00:13:56.184 ======================================================== 00:13:56.184 Total : 39940.89 156.02 3204.56 992.41 7581.06 00:13:56.184 00:13:56.184 [2024-11-07 10:42:23.595130] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.184 10:42:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:56.184 [2024-11-07 10:42:23.796555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:01.452 [2024-11-07 10:42:28.938526] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:01.452 Initializing NVMe Controllers 00:14:01.452 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:01.452 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:01.452 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:01.452 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:01.452 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:01.452 Initialization complete. Launching workers. 
00:14:01.452 Starting thread on core 2 00:14:01.452 Starting thread on core 3 00:14:01.452 Starting thread on core 1 00:14:01.452 10:42:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:01.710 [2024-11-07 10:42:29.239874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:05.053 [2024-11-07 10:42:32.297759] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:05.053 Initializing NVMe Controllers 00:14:05.053 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:05.053 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:05.053 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:05.053 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:05.053 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:05.053 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:05.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:05.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:05.053 Initialization complete. Launching workers. 00:14:05.053 Starting thread on core 1 with urgent priority queue 00:14:05.053 Starting thread on core 2 with urgent priority queue 00:14:05.053 Starting thread on core 3 with urgent priority queue 00:14:05.053 Starting thread on core 0 with urgent priority queue 00:14:05.053 SPDK bdev Controller (SPDK2 ) core 0: 8854.33 IO/s 11.29 secs/100000 ios 00:14:05.053 SPDK bdev Controller (SPDK2 ) core 1: 8131.00 IO/s 12.30 secs/100000 ios 00:14:05.053 SPDK bdev Controller (SPDK2 ) core 2: 8510.67 IO/s 11.75 secs/100000 ios 00:14:05.053 SPDK bdev Controller (SPDK2 ) core 3: 7878.33 IO/s 12.69 secs/100000 ios 00:14:05.053 ======================================================== 00:14:05.053 00:14:05.053 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:05.053 [2024-11-07 10:42:32.583695] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:05.053 Initializing NVMe Controllers 00:14:05.053 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:05.053 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:05.053 Namespace ID: 1 size: 0GB 00:14:05.053 Initialization complete. 00:14:05.053 INFO: using host memory buffer for IO 00:14:05.053 Hello world! 
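After the overhead pass below, each device finishes with the aer_vfio_user check: an aer example process is started with -t /tmp/aer_touch_file and left waiting, a new Malloc bdev is attached to the subsystem as namespace 2 over RPC, and the test passes once the resulting namespace-attribute AER is logged ("aer_cb - Changed Namespace"). A hedged sketch of that choreography for the first controller, pieced together from the rpc.py and aer invocations visible earlier in the log; the final jq query is an extra convenience, not something the harness runs:

# Sketch: namespace-attach AER check, mirroring the aer_vfio_user steps for cnode1.
SPDK_DIR=/path/to/spdk          # assumption: local SPDK checkout
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
TOUCH=/tmp/aer_touch_file

# Start the AER listener in the background; the harness's waitforfile relies on the
# aer example creating $TOUCH once its callbacks are armed.
"$SPDK_DIR"/test/nvme/aer/aer -r "$TRID" -n 2 -g -t "$TOUCH" &
aerpid=$!
while [ ! -e "$TOUCH" ]; do sleep 1; done
rm -f "$TOUCH"

# Hot-add a second namespace to the subsystem; this triggers the namespace-attribute AER.
"$SPDK_DIR"/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
"$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2

# The aer process exits once it has observed the changed-namespace event.
wait "$aerpid"

# Optional: confirm the listing now shows Malloc3 as nsid 2 (requires jq, not used by the harness).
"$SPDK_DIR"/scripts/rpc.py nvmf_get_subsystems | jq '.[] | select(.nqn=="nqn.2019-07.io.spdk:cnode1") | .namespaces'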
00:14:05.053 [2024-11-07 10:42:32.595763] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:05.053 10:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:05.311 [2024-11-07 10:42:32.874169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:06.686 Initializing NVMe Controllers 00:14:06.686 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:06.686 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:06.686 Initialization complete. Launching workers. 00:14:06.686 submit (in ns) avg, min, max = 7115.7, 3216.5, 4001696.5 00:14:06.686 complete (in ns) avg, min, max = 20150.1, 1765.2, 4174944.3 00:14:06.686 00:14:06.686 Submit histogram 00:14:06.686 ================ 00:14:06.686 Range in us Cumulative Count 00:14:06.686 3.214 - 3.228: 0.0061% ( 1) 00:14:06.686 3.242 - 3.256: 0.0244% ( 3) 00:14:06.686 3.256 - 3.270: 0.0366% ( 2) 00:14:06.686 3.270 - 3.283: 0.2502% ( 35) 00:14:06.686 3.283 - 3.297: 2.0569% ( 296) 00:14:06.686 3.297 - 3.311: 5.6519% ( 589) 00:14:06.686 3.311 - 3.325: 10.6445% ( 818) 00:14:06.686 3.325 - 3.339: 16.6138% ( 978) 00:14:06.686 3.339 - 3.353: 22.4976% ( 964) 00:14:06.686 3.353 - 3.367: 28.4180% ( 970) 00:14:06.686 3.367 - 3.381: 33.6975% ( 865) 00:14:06.686 3.381 - 3.395: 39.3311% ( 923) 00:14:06.686 3.395 - 3.409: 43.9514% ( 757) 00:14:06.686 3.409 - 3.423: 48.0530% ( 672) 00:14:06.686 3.423 - 3.437: 51.8066% ( 615) 00:14:06.686 3.437 - 3.450: 57.3181% ( 903) 00:14:06.686 3.450 - 3.464: 63.8550% ( 1071) 00:14:06.686 3.464 - 3.478: 68.9575% ( 836) 00:14:06.686 3.478 - 3.492: 74.2737% ( 871) 00:14:06.686 3.492 - 3.506: 78.8025% ( 742) 00:14:06.686 3.506 - 3.520: 82.2693% ( 568) 00:14:06.686 3.520 - 3.534: 84.7778% ( 411) 00:14:06.686 3.534 - 3.548: 86.1572% ( 226) 00:14:06.686 3.548 - 3.562: 86.9812% ( 135) 00:14:06.686 3.562 - 3.590: 87.7991% ( 134) 00:14:06.686 3.590 - 3.617: 89.1968% ( 229) 00:14:06.686 3.617 - 3.645: 90.9424% ( 286) 00:14:06.686 3.645 - 3.673: 92.6819% ( 285) 00:14:06.686 3.673 - 3.701: 94.2871% ( 263) 00:14:06.686 3.701 - 3.729: 96.1304% ( 302) 00:14:06.686 3.729 - 3.757: 97.4304% ( 213) 00:14:06.686 3.757 - 3.784: 98.4436% ( 166) 00:14:06.686 3.784 - 3.812: 98.9685% ( 86) 00:14:06.686 3.812 - 3.840: 99.3408% ( 61) 00:14:06.686 3.840 - 3.868: 99.4873% ( 24) 00:14:06.686 3.868 - 3.896: 99.5544% ( 11) 00:14:06.686 3.896 - 3.923: 99.5728% ( 3) 00:14:06.686 3.923 - 3.951: 99.5850% ( 2) 00:14:06.686 5.231 - 5.259: 99.5911% ( 1) 00:14:06.686 5.315 - 5.343: 99.6033% ( 2) 00:14:06.686 5.343 - 5.370: 99.6094% ( 1) 00:14:06.686 5.370 - 5.398: 99.6155% ( 1) 00:14:06.686 5.398 - 5.426: 99.6338% ( 3) 00:14:06.686 5.426 - 5.454: 99.6399% ( 1) 00:14:06.686 5.454 - 5.482: 99.6460% ( 1) 00:14:06.686 5.482 - 5.510: 99.6521% ( 1) 00:14:06.686 5.565 - 5.593: 99.6582% ( 1) 00:14:06.686 5.621 - 5.649: 99.6643% ( 1) 00:14:06.686 5.677 - 5.704: 99.6704% ( 1) 00:14:06.686 5.899 - 5.927: 99.6765% ( 1) 00:14:06.686 5.955 - 5.983: 99.6826% ( 1) 00:14:06.686 6.094 - 6.122: 99.6887% ( 1) 00:14:06.686 6.261 - 6.289: 99.6948% ( 1) 00:14:06.686 6.344 - 6.372: 99.7009% ( 1) 00:14:06.686 6.400 - 6.428: 99.7131% ( 2) 00:14:06.686 6.650 - 6.678: 99.7192% ( 1) 00:14:06.686 6.790 - 6.817: 99.7314% ( 2) 00:14:06.686 7.012 - 
7.040: 99.7375% ( 1) 00:14:06.686 7.040 - 7.068: 99.7498% ( 2) 00:14:06.686 7.096 - 7.123: 99.7559% ( 1) 00:14:06.686 7.123 - 7.179: 99.7681% ( 2) 00:14:06.686 7.457 - 7.513: 99.7803% ( 2) 00:14:06.686 7.513 - 7.569: 99.7864% ( 1) 00:14:06.686 7.569 - 7.624: 99.7925% ( 1) 00:14:06.686 7.624 - 7.680: 99.7986% ( 1) 00:14:06.686 7.680 - 7.736: 99.8047% ( 1) 00:14:06.686 7.736 - 7.791: 99.8108% ( 1) 00:14:06.686 7.847 - 7.903: 99.8169% ( 1) 00:14:06.686 7.903 - 7.958: 99.8230% ( 1) 00:14:06.686 8.014 - 8.070: 99.8291% ( 1) 00:14:06.686 8.070 - 8.125: 99.8352% ( 1) 00:14:06.686 8.181 - 8.237: 99.8413% ( 1) 00:14:06.686 8.292 - 8.348: 99.8474% ( 1) 00:14:06.686 8.403 - 8.459: 99.8535% ( 1) 00:14:06.686 8.459 - 8.515: 99.8596% ( 1) 00:14:06.686 8.570 - 8.626: 99.8718% ( 2) 00:14:06.686 8.849 - 8.904: 99.8779% ( 1) 00:14:06.686 9.016 - 9.071: 99.8840% ( 1) 00:14:06.686 9.071 - 9.127: 99.8962% ( 2) 00:14:06.686 [2024-11-07 10:42:33.965468] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:06.686 9.238 - 9.294: 99.9023% ( 1) 00:14:06.686 19.256 - 19.367: 99.9084% ( 1) 00:14:06.686 3989.148 - 4017.642: 100.0000% ( 15) 00:14:06.686 00:14:06.686 Complete histogram 00:14:06.686 ================== 00:14:06.686 Range in us Cumulative Count 00:14:06.686 1.760 - 1.767: 0.0122% ( 2) 00:14:06.686 1.767 - 1.774: 0.0366% ( 4) 00:14:06.686 1.774 - 1.781: 0.0732% ( 6) 00:14:06.686 1.781 - 1.795: 0.1404% ( 11) 00:14:06.686 1.795 - 1.809: 0.1465% ( 1) 00:14:06.686 1.809 - 1.823: 3.2227% ( 504) 00:14:06.686 1.823 - 1.837: 56.6345% ( 8751) 00:14:06.686 1.837 - 1.850: 88.5559% ( 5230) 00:14:06.686 1.850 - 1.864: 93.7866% ( 857) 00:14:06.686 1.864 - 1.878: 95.2881% ( 246) 00:14:06.686 1.878 - 1.892: 96.2036% ( 150) 00:14:06.686 1.892 - 1.906: 97.4304% ( 201) 00:14:06.686 1.906 - 1.920: 98.5168% ( 178) 00:14:06.686 1.920 - 1.934: 99.0479% ( 87) 00:14:06.686 1.934 - 1.948: 99.2615% ( 35) 00:14:06.686 1.948 - 1.962: 99.3164% ( 9) 00:14:06.686 1.962 - 1.976: 99.3408% ( 4) 00:14:06.686 1.976 - 1.990: 99.3469% ( 1) 00:14:06.686 1.990 - 2.003: 99.3652% ( 3) 00:14:06.686 2.017 - 2.031: 99.3713% ( 1) 00:14:06.686 2.031 - 2.045: 99.3774% ( 1) 00:14:06.686 2.045 - 2.059: 99.3835% ( 1) 00:14:06.686 2.073 - 2.087: 99.3958% ( 2) 00:14:06.686 3.951 - 3.979: 99.4080% ( 2) 00:14:06.686 4.146 - 4.174: 99.4141% ( 1) 00:14:06.686 4.230 - 4.257: 99.4202% ( 1) 00:14:06.686 4.842 - 4.870: 99.4263% ( 1) 00:14:06.686 5.037 - 5.064: 99.4385% ( 2) 00:14:06.686 5.203 - 5.231: 99.4446% ( 1) 00:14:06.686 5.287 - 5.315: 99.4507% ( 1) 00:14:06.686 5.343 - 5.370: 99.4568% ( 1) 00:14:06.686 5.426 - 5.454: 99.4629% ( 1) 00:14:06.686 5.482 - 5.510: 99.4690% ( 1) 00:14:06.686 5.649 - 5.677: 99.4751% ( 1) 00:14:06.686 6.177 - 6.205: 99.4812% ( 1) 00:14:06.686 6.400 - 6.428: 99.4934% ( 2) 00:14:06.686 6.511 - 6.539: 99.4995% ( 1) 00:14:06.686 6.817 - 6.845: 99.5056% ( 1) 00:14:06.686 7.012 - 7.040: 99.5117% ( 1) 00:14:06.686 7.847 - 7.903: 99.5178% ( 1) 00:14:06.686 8.403 - 8.459: 99.5239% ( 1) 00:14:06.686 8.960 - 9.016: 99.5300% ( 1) 00:14:06.686 14.581 - 14.692: 99.5361% ( 1) 00:14:06.686 40.292 - 40.515: 99.5422% ( 1) 00:14:06.686 3989.148 - 4017.642: 99.9817% ( 72) 00:14:06.686 4017.642 - 4046.136: 99.9939% ( 2) 00:14:06.686 4160.111 - 4188.605: 100.0000% ( 1) 00:14:06.686 00:14:06.686 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:06.686 10:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:06.686 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:06.686 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:06.686 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:06.686 [ 00:14:06.686 { 00:14:06.686 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:06.686 "subtype": "Discovery", 00:14:06.686 "listen_addresses": [], 00:14:06.686 "allow_any_host": true, 00:14:06.686 "hosts": [] 00:14:06.686 }, 00:14:06.686 { 00:14:06.686 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:06.686 "subtype": "NVMe", 00:14:06.686 "listen_addresses": [ 00:14:06.686 { 00:14:06.686 "trtype": "VFIOUSER", 00:14:06.686 "adrfam": "IPv4", 00:14:06.686 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:06.686 "trsvcid": "0" 00:14:06.686 } 00:14:06.686 ], 00:14:06.686 "allow_any_host": true, 00:14:06.686 "hosts": [], 00:14:06.686 "serial_number": "SPDK1", 00:14:06.686 "model_number": "SPDK bdev Controller", 00:14:06.686 "max_namespaces": 32, 00:14:06.686 "min_cntlid": 1, 00:14:06.686 "max_cntlid": 65519, 00:14:06.686 "namespaces": [ 00:14:06.686 { 00:14:06.686 "nsid": 1, 00:14:06.686 "bdev_name": "Malloc1", 00:14:06.686 "name": "Malloc1", 00:14:06.687 "nguid": "E37C8D5358EE4F399BACE38A1E55080E", 00:14:06.687 "uuid": "e37c8d53-58ee-4f39-9bac-e38a1e55080e" 00:14:06.687 }, 00:14:06.687 { 00:14:06.687 "nsid": 2, 00:14:06.687 "bdev_name": "Malloc3", 00:14:06.687 "name": "Malloc3", 00:14:06.687 "nguid": "C3753B15DFF442C6A892499D05E00A35", 00:14:06.687 "uuid": "c3753b15-dff4-42c6-a892-499d05e00a35" 00:14:06.687 } 00:14:06.687 ] 00:14:06.687 }, 00:14:06.687 { 00:14:06.687 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:06.687 "subtype": "NVMe", 00:14:06.687 "listen_addresses": [ 00:14:06.687 { 00:14:06.687 "trtype": "VFIOUSER", 00:14:06.687 "adrfam": "IPv4", 00:14:06.687 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:06.687 "trsvcid": "0" 00:14:06.687 } 00:14:06.687 ], 00:14:06.687 "allow_any_host": true, 00:14:06.687 "hosts": [], 00:14:06.687 "serial_number": "SPDK2", 00:14:06.687 "model_number": "SPDK bdev Controller", 00:14:06.687 "max_namespaces": 32, 00:14:06.687 "min_cntlid": 1, 00:14:06.687 "max_cntlid": 65519, 00:14:06.687 "namespaces": [ 00:14:06.687 { 00:14:06.687 "nsid": 1, 00:14:06.687 "bdev_name": "Malloc2", 00:14:06.687 "name": "Malloc2", 00:14:06.687 "nguid": "BE3C0F066B774B4CA84DF7649A542125", 00:14:06.687 "uuid": "be3c0f06-6b77-4b4c-a84d-f7649a542125" 00:14:06.687 } 00:14:06.687 ] 00:14:06.687 } 00:14:06.687 ] 00:14:06.687 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:06.687 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2653935 00:14:06.687 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:06.687 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 
00:14:06.687 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:14:06.687 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:06.687 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:06.687 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:14:06.687 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:06.687 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:06.945 [2024-11-07 10:42:34.360900] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:06.945 Malloc4 00:14:06.945 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:07.203 [2024-11-07 10:42:34.626945] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:07.203 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:07.203 Asynchronous Event Request test 00:14:07.203 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:07.203 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:07.203 Registering asynchronous event callbacks... 00:14:07.203 Starting namespace attribute notice tests for all controllers... 00:14:07.204 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:07.204 aer_cb - Changed Namespace 00:14:07.204 Cleaning up... 
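
The aer test above works in two steps: the tool arms its Asynchronous Event callbacks and touches /tmp/aer_touch_file to signal readiness, then the script hot-adds a namespace (Malloc4 as NSID 2 of cnode2), which raises the namespace-attribute-change event seen in the output (aer_cb for log page 4) and lets the tool finish cleanly. A minimal sketch of that hot-add sequence, reusing the rpc.py calls visible in the trace (socket path, bdev arguments, and NQN all taken from above; rpc.py talks to the target's /var/tmp/spdk.sock RPC socket):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path from the trace
    # create the 64 MB, 512-byte-block malloc bdev and expose it as NSID 2 of cnode2
    "$RPC" bdev_malloc_create 64 512 --name Malloc4
    "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    # re-query the target: the cnode2 entry now lists the new namespace (see the JSON that follows)
    "$RPC" nvmf_get_subsystems
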
00:14:07.204 [ 00:14:07.204 { 00:14:07.204 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:07.204 "subtype": "Discovery", 00:14:07.204 "listen_addresses": [], 00:14:07.204 "allow_any_host": true, 00:14:07.204 "hosts": [] 00:14:07.204 }, 00:14:07.204 { 00:14:07.204 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:07.204 "subtype": "NVMe", 00:14:07.204 "listen_addresses": [ 00:14:07.204 { 00:14:07.204 "trtype": "VFIOUSER", 00:14:07.204 "adrfam": "IPv4", 00:14:07.204 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:07.204 "trsvcid": "0" 00:14:07.204 } 00:14:07.204 ], 00:14:07.204 "allow_any_host": true, 00:14:07.204 "hosts": [], 00:14:07.204 "serial_number": "SPDK1", 00:14:07.204 "model_number": "SPDK bdev Controller", 00:14:07.204 "max_namespaces": 32, 00:14:07.204 "min_cntlid": 1, 00:14:07.204 "max_cntlid": 65519, 00:14:07.204 "namespaces": [ 00:14:07.204 { 00:14:07.204 "nsid": 1, 00:14:07.204 "bdev_name": "Malloc1", 00:14:07.204 "name": "Malloc1", 00:14:07.204 "nguid": "E37C8D5358EE4F399BACE38A1E55080E", 00:14:07.204 "uuid": "e37c8d53-58ee-4f39-9bac-e38a1e55080e" 00:14:07.204 }, 00:14:07.204 { 00:14:07.204 "nsid": 2, 00:14:07.204 "bdev_name": "Malloc3", 00:14:07.204 "name": "Malloc3", 00:14:07.204 "nguid": "C3753B15DFF442C6A892499D05E00A35", 00:14:07.204 "uuid": "c3753b15-dff4-42c6-a892-499d05e00a35" 00:14:07.204 } 00:14:07.204 ] 00:14:07.204 }, 00:14:07.204 { 00:14:07.204 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:07.204 "subtype": "NVMe", 00:14:07.204 "listen_addresses": [ 00:14:07.204 { 00:14:07.204 "trtype": "VFIOUSER", 00:14:07.204 "adrfam": "IPv4", 00:14:07.204 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:07.204 "trsvcid": "0" 00:14:07.204 } 00:14:07.204 ], 00:14:07.204 "allow_any_host": true, 00:14:07.204 "hosts": [], 00:14:07.204 "serial_number": "SPDK2", 00:14:07.204 "model_number": "SPDK bdev Controller", 00:14:07.204 "max_namespaces": 32, 00:14:07.204 "min_cntlid": 1, 00:14:07.204 "max_cntlid": 65519, 00:14:07.204 "namespaces": [ 00:14:07.204 { 00:14:07.204 "nsid": 1, 00:14:07.204 "bdev_name": "Malloc2", 00:14:07.204 "name": "Malloc2", 00:14:07.204 "nguid": "BE3C0F066B774B4CA84DF7649A542125", 00:14:07.204 "uuid": "be3c0f06-6b77-4b4c-a84d-f7649a542125" 00:14:07.204 }, 00:14:07.204 { 00:14:07.204 "nsid": 2, 00:14:07.204 "bdev_name": "Malloc4", 00:14:07.204 "name": "Malloc4", 00:14:07.204 "nguid": "1099710064F94B2E9744321907B76F9E", 00:14:07.204 "uuid": "10997100-64f9-4b2e-9744-321907b76f9e" 00:14:07.204 } 00:14:07.204 ] 00:14:07.204 } 00:14:07.204 ] 00:14:07.204 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2653935 00:14:07.204 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:07.204 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2645775 00:14:07.204 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 2645775 ']' 00:14:07.204 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2645775 00:14:07.204 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:14:07.204 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:07.204 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2645775 00:14:07.463 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:07.463 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:07.463 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2645775' 00:14:07.463 killing process with pid 2645775 00:14:07.463 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2645775 00:14:07.463 10:42:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2645775 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2654167 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2654167' 00:14:07.722 Process pid: 2654167 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2654167 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2654167 ']' 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:07.722 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:07.722 [2024-11-07 10:42:35.202311] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:07.722 [2024-11-07 10:42:35.203221] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:14:07.722 [2024-11-07 10:42:35.203259] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.722 [2024-11-07 10:42:35.263648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:07.722 [2024-11-07 10:42:35.306243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.722 [2024-11-07 10:42:35.306282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.722 [2024-11-07 10:42:35.306289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.722 [2024-11-07 10:42:35.306296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.722 [2024-11-07 10:42:35.306301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.722 [2024-11-07 10:42:35.311454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.722 [2024-11-07 10:42:35.311472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.722 [2024-11-07 10:42:35.311556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.722 [2024-11-07 10:42:35.311559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.722 [2024-11-07 10:42:35.379210] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:07.722 [2024-11-07 10:42:35.379343] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:07.722 [2024-11-07 10:42:35.379454] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:14:07.722 [2024-11-07 10:42:35.379581] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:07.722 [2024-11-07 10:42:35.379758] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
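
At this point the first target has been torn down and the whole setup is repeated with the target in interrupt mode: nvmf_tgt was restarted with --interrupt-mode on cores 0-3, the reactors report intr mode above, and the trace that follows creates the VFIOUSER transport with the extra -M -I flags plus one malloc-backed subsystem per vfio-user socket. A condensed sketch of that per-device RPC loop, mirroring the seq 1 2 loop in nvmf_vfio_user.sh (paths and NQNs exactly as they appear below; this only illustrates the RPC order and is not a substitute for the script):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" nvmf_create_transport -t VFIOUSER -M -I
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        "$RPC" bdev_malloc_create 64 512 -b Malloc$i
        "$RPC" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        "$RPC" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done
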
00:14:07.980 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:07.980 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:07.980 10:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:08.916 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:09.174 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:09.174 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:09.174 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:09.174 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:09.174 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:09.174 Malloc1 00:14:09.174 10:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:09.433 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:09.691 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:09.949 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:09.949 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:09.949 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:09.949 Malloc2 00:14:10.207 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:10.207 10:42:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:10.465 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:10.723 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:10.723 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2654167 00:14:10.723 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 2654167 ']' 00:14:10.723 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2654167 00:14:10.723 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:14:10.724 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:10.724 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2654167 00:14:10.724 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:10.724 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:10.724 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2654167' 00:14:10.724 killing process with pid 2654167 00:14:10.724 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2654167 00:14:10.724 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2654167 00:14:10.982 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:10.982 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:10.982 00:14:10.982 real 0m50.896s 00:14:10.982 user 3m17.088s 00:14:10.982 sys 0m3.282s 00:14:10.982 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:10.982 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:10.982 ************************************ 00:14:10.982 END TEST nvmf_vfio_user 00:14:10.982 ************************************ 00:14:10.982 10:42:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:10.982 10:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:10.982 10:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:10.982 10:42:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.982 ************************************ 00:14:10.982 START TEST nvmf_vfio_user_nvme_compliance 00:14:10.982 ************************************ 00:14:10.982 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:10.982 * Looking for test storage... 
00:14:10.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:10.982 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:10.982 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:14:10.982 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:11.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.242 --rc genhtml_branch_coverage=1 00:14:11.242 --rc genhtml_function_coverage=1 00:14:11.242 --rc genhtml_legend=1 00:14:11.242 --rc geninfo_all_blocks=1 00:14:11.242 --rc geninfo_unexecuted_blocks=1 00:14:11.242 00:14:11.242 ' 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:11.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.242 --rc genhtml_branch_coverage=1 00:14:11.242 --rc genhtml_function_coverage=1 00:14:11.242 --rc genhtml_legend=1 00:14:11.242 --rc geninfo_all_blocks=1 00:14:11.242 --rc geninfo_unexecuted_blocks=1 00:14:11.242 00:14:11.242 ' 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:11.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.242 --rc genhtml_branch_coverage=1 00:14:11.242 --rc genhtml_function_coverage=1 00:14:11.242 --rc genhtml_legend=1 00:14:11.242 --rc geninfo_all_blocks=1 00:14:11.242 --rc geninfo_unexecuted_blocks=1 00:14:11.242 00:14:11.242 ' 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:11.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.242 --rc genhtml_branch_coverage=1 00:14:11.242 --rc genhtml_function_coverage=1 00:14:11.242 --rc genhtml_legend=1 00:14:11.242 --rc geninfo_all_blocks=1 00:14:11.242 --rc 
geninfo_unexecuted_blocks=1 00:14:11.242 00:14:11.242 ' 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.242 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:11.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2654816 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2654816' 00:14:11.243 Process pid: 2654816 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2654816 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 2654816 ']' 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:11.243 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:11.243 [2024-11-07 10:42:38.741530] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
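
The compliance suite gets a fresh single-subsystem target of its own: an nvmf_tgt pinned to three cores (-m 0x7), a VFIOUSER transport, and one malloc namespace on nqn.2021-09.io.spdk:cnode0 listening at /var/run/vfio-user, against which the CUnit-based nvme_compliance binary then drives the admin and I/O queue corner cases reported below. A minimal sketch of that setup, condensed from the rpc_cmd calls in the trace that follows (rpc_cmd is the test framework's RPC helper; showing plain scripts/rpc.py calls here is an assumption for illustration, not part of the script):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK_DIR/scripts/rpc.py
    "$RPC" nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    "$RPC" bdev_malloc_create 64 512 -b malloc0
    "$RPC" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    "$RPC" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # run the compliance tests against the socket (18 tests, all passing in this job)
    "$SPDK_DIR"/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
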
00:14:11.243 [2024-11-07 10:42:38.741586] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.243 [2024-11-07 10:42:38.805323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:11.243 [2024-11-07 10:42:38.847269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.243 [2024-11-07 10:42:38.847310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.243 [2024-11-07 10:42:38.847317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.243 [2024-11-07 10:42:38.847324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.243 [2024-11-07 10:42:38.847329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.243 [2024-11-07 10:42:38.848716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.243 [2024-11-07 10:42:38.848811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.243 [2024-11-07 10:42:38.848813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.501 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:11.501 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:14:11.501 10:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:12.447 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:12.447 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:12.447 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:12.447 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.448 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:12.448 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.448 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:12.448 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:12.448 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.448 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:12.448 malloc0 00:14:12.448 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.448 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:12.448 10:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.448 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:12.448 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.448 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:12.448 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.448 10:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:12.448 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.448 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:12.448 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.448 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:12.448 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.448 10:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:12.704 00:14:12.704 00:14:12.704 CUnit - A unit testing framework for C - Version 2.1-3 00:14:12.704 http://cunit.sourceforge.net/ 00:14:12.704 00:14:12.704 00:14:12.704 Suite: nvme_compliance 00:14:12.704 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-07 10:42:40.182991] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:12.704 [2024-11-07 10:42:40.184332] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:12.704 [2024-11-07 10:42:40.184349] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:12.704 [2024-11-07 10:42:40.184355] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:12.704 [2024-11-07 10:42:40.188025] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:12.704 passed 00:14:12.704 Test: admin_identify_ctrlr_verify_fused ...[2024-11-07 10:42:40.265579] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:12.704 [2024-11-07 10:42:40.268601] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:12.704 passed 00:14:12.704 Test: admin_identify_ns ...[2024-11-07 10:42:40.349616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:12.961 [2024-11-07 10:42:40.409450] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:12.961 [2024-11-07 10:42:40.417443] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:12.961 [2024-11-07 10:42:40.437377] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:12.961 passed 00:14:12.961 Test: admin_get_features_mandatory_features ...[2024-11-07 10:42:40.516650] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:12.961 [2024-11-07 10:42:40.519668] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:12.961 passed 00:14:12.961 Test: admin_get_features_optional_features ...[2024-11-07 10:42:40.598185] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:12.961 [2024-11-07 10:42:40.601201] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.219 passed 00:14:13.219 Test: admin_set_features_number_of_queues ...[2024-11-07 10:42:40.677970] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.219 [2024-11-07 10:42:40.786579] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.219 passed 00:14:13.219 Test: admin_get_log_page_mandatory_logs ...[2024-11-07 10:42:40.858881] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.219 [2024-11-07 10:42:40.861905] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.476 passed 00:14:13.476 Test: admin_get_log_page_with_lpo ...[2024-11-07 10:42:40.940911] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.476 [2024-11-07 10:42:41.009449] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:13.476 [2024-11-07 10:42:41.022497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.476 passed 00:14:13.476 Test: fabric_property_get ...[2024-11-07 10:42:41.097577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.476 [2024-11-07 10:42:41.098839] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:13.476 [2024-11-07 10:42:41.100603] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.476 passed 00:14:13.734 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-07 10:42:41.181121] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.734 [2024-11-07 10:42:41.182354] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:13.734 [2024-11-07 10:42:41.184138] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.734 passed 00:14:13.734 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-07 10:42:41.260889] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.734 [2024-11-07 10:42:41.344448] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:13.734 [2024-11-07 10:42:41.360443] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:13.734 [2024-11-07 10:42:41.365524] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.734 passed 00:14:13.991 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-07 10:42:41.443390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.991 [2024-11-07 10:42:41.444635] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:13.991 [2024-11-07 10:42:41.449424] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.991 passed 00:14:13.991 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-07 10:42:41.524390] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:13.991 [2024-11-07 10:42:41.599442] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:13.991 [2024-11-07 10:42:41.623452] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:13.991 [2024-11-07 10:42:41.628529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:13.991 passed 00:14:14.249 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-07 10:42:41.707421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.249 [2024-11-07 10:42:41.708667] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:14.249 [2024-11-07 10:42:41.708689] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:14.249 [2024-11-07 10:42:41.710446] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.249 passed 00:14:14.249 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-07 10:42:41.787915] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.249 [2024-11-07 10:42:41.883447] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:14.250 [2024-11-07 10:42:41.891439] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:14.250 [2024-11-07 10:42:41.899440] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:14.250 [2024-11-07 10:42:41.907453] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:14.508 [2024-11-07 10:42:41.936526] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.508 passed 00:14:14.508 Test: admin_create_io_sq_verify_pc ...[2024-11-07 10:42:42.011861] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:14.508 [2024-11-07 10:42:42.028446] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:14.508 [2024-11-07 10:42:42.046032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:14.508 passed 00:14:14.508 Test: admin_create_io_qp_max_qps ...[2024-11-07 10:42:42.123587] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:15.876 [2024-11-07 10:42:43.234443] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:16.131 [2024-11-07 10:42:43.613668] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.131 passed 00:14:16.131 Test: admin_create_io_sq_shared_cq ...[2024-11-07 10:42:43.690830] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:16.389 [2024-11-07 10:42:43.823441] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:16.389 [2024-11-07 10:42:43.860500] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:16.389 passed 00:14:16.389 00:14:16.389 Run Summary: Type Total Ran Passed Failed Inactive 00:14:16.389 suites 1 1 n/a 0 0 00:14:16.389 tests 18 18 18 0 0 00:14:16.389 asserts 
360 360 360 0 n/a 00:14:16.389 00:14:16.389 Elapsed time = 1.512 seconds 00:14:16.389 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2654816 00:14:16.389 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 2654816 ']' 00:14:16.389 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 2654816 00:14:16.389 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:14:16.389 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:16.389 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2654816 00:14:16.389 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:16.389 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:16.389 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2654816' 00:14:16.389 killing process with pid 2654816 00:14:16.389 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 2654816 00:14:16.389 10:42:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 2654816 00:14:16.647 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:16.647 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:16.647 00:14:16.647 real 0m5.624s 00:14:16.647 user 0m15.828s 00:14:16.647 sys 0m0.502s 00:14:16.647 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:16.647 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:16.647 ************************************ 00:14:16.647 END TEST nvmf_vfio_user_nvme_compliance 00:14:16.647 ************************************ 00:14:16.647 10:42:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:16.647 10:42:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:16.647 10:42:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:16.647 10:42:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:16.647 ************************************ 00:14:16.647 START TEST nvmf_vfio_user_fuzz 00:14:16.647 ************************************ 00:14:16.647 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:16.647 * Looking for test storage... 
00:14:16.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:16.647 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:16.647 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:14:16.647 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:16.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.905 --rc genhtml_branch_coverage=1 00:14:16.905 --rc genhtml_function_coverage=1 00:14:16.905 --rc genhtml_legend=1 00:14:16.905 --rc geninfo_all_blocks=1 00:14:16.905 --rc geninfo_unexecuted_blocks=1 00:14:16.905 00:14:16.905 ' 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:16.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.905 --rc genhtml_branch_coverage=1 00:14:16.905 --rc genhtml_function_coverage=1 00:14:16.905 --rc genhtml_legend=1 00:14:16.905 --rc geninfo_all_blocks=1 00:14:16.905 --rc geninfo_unexecuted_blocks=1 00:14:16.905 00:14:16.905 ' 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:16.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.905 --rc genhtml_branch_coverage=1 00:14:16.905 --rc genhtml_function_coverage=1 00:14:16.905 --rc genhtml_legend=1 00:14:16.905 --rc geninfo_all_blocks=1 00:14:16.905 --rc geninfo_unexecuted_blocks=1 00:14:16.905 00:14:16.905 ' 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:16.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.905 --rc genhtml_branch_coverage=1 00:14:16.905 --rc genhtml_function_coverage=1 00:14:16.905 --rc genhtml_legend=1 00:14:16.905 --rc geninfo_all_blocks=1 00:14:16.905 --rc geninfo_unexecuted_blocks=1 00:14:16.905 00:14:16.905 ' 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:16.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:16.905 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2655871 00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2655871' 00:14:16.906 Process pid: 2655871 00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2655871 00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 2655871 ']' 00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
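A note on the "[: : integer expression expected" message reported from nvmf/common.sh line 33 above: it appears because a numeric test of the form '[ "$flag" -eq 1 ]' is evaluated while the flag expands to an empty string, and the harness simply carries on past it. A minimal reproduction and the usual defensive rewrite look like this (the variable name below is hypothetical, not the one common.sh actually checks):

    unset SOME_TEST_FLAG
    [ "$SOME_TEST_FLAG" -eq 1 ] && echo enabled        # bash prints: [: : integer expression expected
    [ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo enabled   # empty value defaults to 0, test is quiet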
00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:16.906 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:17.164 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:17.164 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:14:17.164 10:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:18.097 malloc0 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
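Condensed, the target-side bring-up traced above is a short sequence of RPCs against the freshly started nvmf_tgt (rpc_cmd is the autotest wrapper; the sketch below assumes it resolves to scripts/rpc.py talking to the default /var/tmp/spdk.sock):

    mkdir -p /var/run/vfio-user
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0     # 64 MiB backing bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

The resulting transport ID ('trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user') is what gets handed to the fuzzer in the next step.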
00:14:18.097 10:42:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:50.158 Fuzzing completed. Shutting down the fuzz application 00:14:50.158 00:14:50.158 Dumping successful admin opcodes: 00:14:50.158 8, 9, 10, 24, 00:14:50.158 Dumping successful io opcodes: 00:14:50.158 0, 00:14:50.158 NS: 0x20000081ef00 I/O qp, Total commands completed: 1047728, total successful commands: 4131, random_seed: 4012584512 00:14:50.158 NS: 0x20000081ef00 admin qp, Total commands completed: 259341, total successful commands: 2091, random_seed: 3269271360 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2655871 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 2655871 ']' 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 2655871 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2655871 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2655871' 00:14:50.158 killing process with pid 2655871 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 2655871 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 2655871 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:50.158 00:14:50.158 real 0m32.137s 00:14:50.158 user 0m30.221s 00:14:50.158 sys 0m31.399s 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:50.158 
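Read as decimal NVMe opcodes, the "successful admin opcodes: 8, 9, 10, 24" above would correspond to Abort, Set Features, Get Features and Keep Alive, and the lone I/O opcode 0 to Flush; the bulk of the roughly one million random commands were presumably rejected, which is the point of the exercise: the vfio-user controller has to survive them, not accept them. The same pass can be repeated by hand with the binary and transport ID from the trace (my reading, stated as an assumption, is that -t is the runtime budget in seconds and -S pins the random seed; -N and -a are passed exactly as the harness does):

    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a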
************************************ 00:14:50.158 END TEST nvmf_vfio_user_fuzz 00:14:50.158 ************************************ 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:50.158 ************************************ 00:14:50.158 START TEST nvmf_auth_target 00:14:50.158 ************************************ 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:50.158 * Looking for test storage... 00:14:50.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:50.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.158 --rc genhtml_branch_coverage=1 00:14:50.158 --rc genhtml_function_coverage=1 00:14:50.158 --rc genhtml_legend=1 00:14:50.158 --rc geninfo_all_blocks=1 00:14:50.158 --rc geninfo_unexecuted_blocks=1 00:14:50.158 00:14:50.158 ' 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:50.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.158 --rc genhtml_branch_coverage=1 00:14:50.158 --rc genhtml_function_coverage=1 00:14:50.158 --rc genhtml_legend=1 00:14:50.158 --rc geninfo_all_blocks=1 00:14:50.158 --rc geninfo_unexecuted_blocks=1 00:14:50.158 00:14:50.158 ' 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:50.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.158 --rc genhtml_branch_coverage=1 00:14:50.158 --rc genhtml_function_coverage=1 00:14:50.158 --rc genhtml_legend=1 00:14:50.158 --rc geninfo_all_blocks=1 00:14:50.158 --rc geninfo_unexecuted_blocks=1 00:14:50.158 00:14:50.158 ' 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:50.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.158 --rc genhtml_branch_coverage=1 00:14:50.158 --rc genhtml_function_coverage=1 00:14:50.158 --rc genhtml_legend=1 00:14:50.158 --rc geninfo_all_blocks=1 00:14:50.158 --rc geninfo_unexecuted_blocks=1 00:14:50.158 00:14:50.158 ' 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.158 10:43:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:50.158 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:50.159 10:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:54.508 
10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:54.508 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:54.508 10:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:54.508 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:54.508 Found net devices under 0000:86:00.0: cvl_0_0 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:54.508 Found net devices under 0000:86:00.1: cvl_0_1 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.508 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:54.509 10:43:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:54.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:14:54.509 00:14:54.509 --- 10.0.0.2 ping statistics --- 00:14:54.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.509 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:14:54.509 00:14:54.509 --- 10.0.0.1 ping statistics --- 00:14:54.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.509 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2664117 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2664117 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2664117 ']' 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
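For orientation, the 'phy' network setup that those two pings just verified boils down to: move the first E810 port (cvl_0_0) into a private namespace for the target, keep its sibling port (cvl_0_1) in the root namespace for the initiator, address both on 10.0.0.0/24, and open TCP/4420. Condensed from the commands in the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

Every subsequent nvmf_tgt invocation is then prefixed with 'ip netns exec cvl_0_0_ns_spdk', so the target listens from 10.0.0.2 inside the namespace while the host-side tools connect from 10.0.0.1.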
00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:54.509 10:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2664226 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=71ec46ffede1c2cf976e3c6942612a882dd25de0325a0e8c 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.YdF 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 71ec46ffede1c2cf976e3c6942612a882dd25de0325a0e8c 0 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 71ec46ffede1c2cf976e3c6942612a882dd25de0325a0e8c 0 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # 
key=71ec46ffede1c2cf976e3c6942612a882dd25de0325a0e8c 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.YdF 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.YdF 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.YdF 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:54.509 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:54.768 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=df4115a76ff8bfb2932732102996327d3be1710e14a7f19a6b2d932268caa0f0 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.lq4 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key df4115a76ff8bfb2932732102996327d3be1710e14a7f19a6b2d932268caa0f0 3 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 df4115a76ff8bfb2932732102996327d3be1710e14a7f19a6b2d932268caa0f0 3 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=df4115a76ff8bfb2932732102996327d3be1710e14a7f19a6b2d932268caa0f0 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.lq4 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.lq4 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.lq4 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3a90f534cc665a1becf5011d54b0e921 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.SPK 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3a90f534cc665a1becf5011d54b0e921 1 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3a90f534cc665a1becf5011d54b0e921 1 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3a90f534cc665a1becf5011d54b0e921 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.SPK 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.SPK 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.SPK 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6e51983b1bd40fac6a34be7aae4304fd2eb31b4a128f2976 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.cXc 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6e51983b1bd40fac6a34be7aae4304fd2eb31b4a128f2976 2 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@747 -- # format_key DHHC-1 6e51983b1bd40fac6a34be7aae4304fd2eb31b4a128f2976 2 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6e51983b1bd40fac6a34be7aae4304fd2eb31b4a128f2976 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.cXc 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.cXc 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.cXc 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=949ae311411ab271ac60eb4d85b3a1b009a0b25216acf051 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5O4 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 949ae311411ab271ac60eb4d85b3a1b009a0b25216acf051 2 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 949ae311411ab271ac60eb4d85b3a1b009a0b25216acf051 2 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=949ae311411ab271ac60eb4d85b3a1b009a0b25216acf051 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5O4 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5O4 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.5O4 
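Every entry in keys[0..3] and ckeys[0..2] comes from the same gen_dhchap_key pattern traced here (and repeated for the remaining keys below): choose a digest id (null=0, sha256=1, sha384=2, sha512=3) and a hex length, read len/2 bytes from /dev/urandom with xxd, wrap the hex string into a DHHC-1 secret with a small inline Python helper, and store it with mode 0600 in a mktemp file. A rough bash equivalent follows; the Python body is an assumption inferred from the "DHHC-1:<digest>:<base64>:" secrets that appear later in this log, not a verbatim copy of the script's helper.

    # Approximate equivalent of gen_dhchap_key null 48, as traced above.
    key=$(xxd -p -c0 -l 24 /dev/urandom)       # 24 random bytes -> 48 hex characters
    file=$(mktemp -t spdk.key-null.XXX)
    python3 - "$key" > "$file" <<'PY'
    import base64, sys, zlib
    k = sys.argv[1].encode()
    # Assumed framing: base64(hex-key || CRC-32); byte order of the CRC suffix is a guess.
    suffix = zlib.crc32(k).to_bytes(4, "little")
    print("DHHC-1:00:" + base64.b64encode(k + suffix).decode() + ":")
    PY
    chmod 0600 "$file"                         # the key file is a secret; restrict permissions
    echo "$file"                               # the caller records this path in keys[i]/ckeys[i]
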
00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ef8d80b2e987d36b78941ef74c2fc6d1 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.sx6 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ef8d80b2e987d36b78941ef74c2fc6d1 1 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ef8d80b2e987d36b78941ef74c2fc6d1 1 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ef8d80b2e987d36b78941ef74c2fc6d1 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:54.769 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.sx6 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.sx6 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.sx6 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f99178daf286d94193369fb9bc66e1ad6a5b788bb3c1dc5bfcdd32da721d9c34 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-sha512.XXX 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Q7E 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f99178daf286d94193369fb9bc66e1ad6a5b788bb3c1dc5bfcdd32da721d9c34 3 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f99178daf286d94193369fb9bc66e1ad6a5b788bb3c1dc5bfcdd32da721d9c34 3 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f99178daf286d94193369fb9bc66e1ad6a5b788bb3c1dc5bfcdd32da721d9c34 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Q7E 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Q7E 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Q7E 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2664117 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2664117 ']' 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:55.028 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2664226 /var/tmp/host.sock 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2664226 ']' 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:14:55.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YdF 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.YdF 00:14:55.287 10:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.YdF 00:14:55.545 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.lq4 ]] 00:14:55.545 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lq4 00:14:55.545 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.545 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.545 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.545 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lq4 00:14:55.545 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lq4 00:14:55.804 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:55.804 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.SPK 00:14:55.804 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.804 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.804 10:43:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.804 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.SPK 00:14:55.804 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.SPK 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.cXc ]] 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cXc 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cXc 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cXc 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5O4 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.5O4 00:14:56.063 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.5O4 00:14:56.322 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.sx6 ]] 00:14:56.322 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sx6 00:14:56.322 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.322 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.322 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.322 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sx6 00:14:56.322 10:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sx6 00:14:56.580 10:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:56.580 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Q7E 00:14:56.580 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.580 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.580 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.580 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Q7E 00:14:56.580 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Q7E 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.838 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.096 00:14:57.096 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.096 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.096 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.354 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.354 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.354 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.354 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.354 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.354 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.354 { 00:14:57.354 "cntlid": 1, 00:14:57.354 "qid": 0, 00:14:57.354 "state": "enabled", 00:14:57.354 "thread": "nvmf_tgt_poll_group_000", 00:14:57.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:57.354 "listen_address": { 00:14:57.354 "trtype": "TCP", 00:14:57.354 "adrfam": "IPv4", 00:14:57.354 "traddr": "10.0.0.2", 00:14:57.354 "trsvcid": "4420" 00:14:57.354 }, 00:14:57.354 "peer_address": { 00:14:57.354 "trtype": "TCP", 00:14:57.354 "adrfam": "IPv4", 00:14:57.354 "traddr": "10.0.0.1", 00:14:57.354 "trsvcid": "56244" 00:14:57.354 }, 00:14:57.354 "auth": { 00:14:57.354 "state": "completed", 00:14:57.354 "digest": "sha256", 00:14:57.354 "dhgroup": "null" 00:14:57.354 } 00:14:57.354 } 00:14:57.354 ]' 00:14:57.354 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.354 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.354 10:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.613 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:57.613 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.613 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.613 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.613 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
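Each keyid then goes through the same connect_authenticate cycle just traced: the host-side bdev_nvme layer is pinned to a single digest/dhgroup combination, the target subsystem is told to accept the host NQN with that key pair, a controller is attached with the matching keys, and the target's qpair listing is checked for auth state "completed" with the expected digest and dhgroup before the controller is detached again. A sketch of one iteration, using the rpc.py calls exactly as they appear in the trace (socket paths and NQNs are from this run):

    # One connect_authenticate iteration (sha256 digest, "null" dhgroup, key0/ckey0).
    rpc=./scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    # Host app (-s /var/tmp/host.sock): restrict the digests/dhgroups it will negotiate.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

    # Target app (default /var/tmp/spdk.sock): allow the host on the subsystem with key0/ckey0.
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host app: attach a controller with the same keys, then verify and tear down.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'            # expect nvme0
    $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state' # expect "completed"
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The same keys are also exercised through the kernel initiator, which is what the long base64 DHHC-1 strings in the entries below are for: `nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...` followed by `nvme disconnect`. The whole cycle then repeats for key1 through key3 and again with the ffdhe2048 dhgroup.
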
00:14:57.613 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:14:57.613 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:14:58.179 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.437 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.437 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.437 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.437 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.437 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.437 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:58.437 10:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:58.437 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:58.437 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.437 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:58.437 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:58.437 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:58.437 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.437 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.437 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.437 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.437 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.437 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.437 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.437 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.695 00:14:58.695 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.695 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.695 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.953 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.953 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.953 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.953 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.953 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.953 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.953 { 00:14:58.953 "cntlid": 3, 00:14:58.953 "qid": 0, 00:14:58.953 "state": "enabled", 00:14:58.953 "thread": "nvmf_tgt_poll_group_000", 00:14:58.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:58.953 "listen_address": { 00:14:58.953 "trtype": "TCP", 00:14:58.953 "adrfam": "IPv4", 00:14:58.953 "traddr": "10.0.0.2", 00:14:58.953 "trsvcid": "4420" 00:14:58.953 }, 00:14:58.953 "peer_address": { 00:14:58.953 "trtype": "TCP", 00:14:58.953 "adrfam": "IPv4", 00:14:58.953 "traddr": "10.0.0.1", 00:14:58.954 "trsvcid": "36862" 00:14:58.954 }, 00:14:58.954 "auth": { 00:14:58.954 "state": "completed", 00:14:58.954 "digest": "sha256", 00:14:58.954 "dhgroup": "null" 00:14:58.954 } 00:14:58.954 } 00:14:58.954 ]' 00:14:58.954 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.954 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.954 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.954 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:58.954 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.225 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.225 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:59.225 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.225 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:14:59.225 10:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:14:59.791 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.791 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.791 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.791 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.791 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.791 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.791 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:59.791 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:00.049 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:00.049 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.049 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:00.049 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:00.049 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:00.049 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.049 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.049 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.049 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.049 10:43:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.049 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.049 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.049 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:00.307 00:15:00.307 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.307 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.307 10:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.565 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.565 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.565 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.565 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.565 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.565 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.565 { 00:15:00.565 "cntlid": 5, 00:15:00.565 "qid": 0, 00:15:00.565 "state": "enabled", 00:15:00.565 "thread": "nvmf_tgt_poll_group_000", 00:15:00.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:00.565 "listen_address": { 00:15:00.565 "trtype": "TCP", 00:15:00.565 "adrfam": "IPv4", 00:15:00.565 "traddr": "10.0.0.2", 00:15:00.565 "trsvcid": "4420" 00:15:00.565 }, 00:15:00.565 "peer_address": { 00:15:00.565 "trtype": "TCP", 00:15:00.565 "adrfam": "IPv4", 00:15:00.565 "traddr": "10.0.0.1", 00:15:00.565 "trsvcid": "36878" 00:15:00.565 }, 00:15:00.565 "auth": { 00:15:00.565 "state": "completed", 00:15:00.565 "digest": "sha256", 00:15:00.565 "dhgroup": "null" 00:15:00.565 } 00:15:00.565 } 00:15:00.565 ]' 00:15:00.565 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.566 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.566 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.566 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:00.566 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.566 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.566 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.566 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.824 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:00.824 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:01.390 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.390 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:01.390 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.390 10:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.390 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.390 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.390 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:01.390 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:01.649 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:01.649 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.649 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.649 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:01.649 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:01.649 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.649 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:01.649 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.649 
10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.649 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.649 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:01.649 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.649 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.907 00:15:01.907 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.907 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.907 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.167 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.167 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.167 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.167 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.167 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.167 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.167 { 00:15:02.167 "cntlid": 7, 00:15:02.167 "qid": 0, 00:15:02.167 "state": "enabled", 00:15:02.167 "thread": "nvmf_tgt_poll_group_000", 00:15:02.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:02.167 "listen_address": { 00:15:02.167 "trtype": "TCP", 00:15:02.167 "adrfam": "IPv4", 00:15:02.167 "traddr": "10.0.0.2", 00:15:02.167 "trsvcid": "4420" 00:15:02.167 }, 00:15:02.167 "peer_address": { 00:15:02.167 "trtype": "TCP", 00:15:02.167 "adrfam": "IPv4", 00:15:02.167 "traddr": "10.0.0.1", 00:15:02.167 "trsvcid": "36896" 00:15:02.167 }, 00:15:02.167 "auth": { 00:15:02.167 "state": "completed", 00:15:02.167 "digest": "sha256", 00:15:02.167 "dhgroup": "null" 00:15:02.167 } 00:15:02.167 } 00:15:02.167 ]' 00:15:02.167 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.167 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.167 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.167 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:02.167 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.167 10:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.167 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.167 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.426 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:02.426 10:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:02.992 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.992 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:02.992 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.992 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.992 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.992 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:02.992 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.992 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:02.992 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:03.250 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:03.250 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.250 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.250 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:03.250 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:03.250 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.250 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.250 10:43:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.250 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.250 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.250 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.250 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.250 10:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.508 00:15:03.508 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.508 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.508 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.766 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.766 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.766 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.766 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.766 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.766 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.766 { 00:15:03.766 "cntlid": 9, 00:15:03.766 "qid": 0, 00:15:03.766 "state": "enabled", 00:15:03.767 "thread": "nvmf_tgt_poll_group_000", 00:15:03.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:03.767 "listen_address": { 00:15:03.767 "trtype": "TCP", 00:15:03.767 "adrfam": "IPv4", 00:15:03.767 "traddr": "10.0.0.2", 00:15:03.767 "trsvcid": "4420" 00:15:03.767 }, 00:15:03.767 "peer_address": { 00:15:03.767 "trtype": "TCP", 00:15:03.767 "adrfam": "IPv4", 00:15:03.767 "traddr": "10.0.0.1", 00:15:03.767 "trsvcid": "36918" 00:15:03.767 }, 00:15:03.767 "auth": { 00:15:03.767 "state": "completed", 00:15:03.767 "digest": "sha256", 00:15:03.767 "dhgroup": "ffdhe2048" 00:15:03.767 } 00:15:03.767 } 00:15:03.767 ]' 00:15:03.767 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.767 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.767 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.767 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:03.767 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.767 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.767 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.767 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.025 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:04.025 10:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:04.593 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.593 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:04.593 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.593 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.593 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.593 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.593 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:04.593 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:04.851 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:04.851 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.851 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.851 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:04.851 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:04.851 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.851 
10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.851 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.851 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.851 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.851 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.851 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.851 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.109 00:15:05.109 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.109 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.109 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.367 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.367 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.367 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.368 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.368 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.368 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.368 { 00:15:05.368 "cntlid": 11, 00:15:05.368 "qid": 0, 00:15:05.368 "state": "enabled", 00:15:05.368 "thread": "nvmf_tgt_poll_group_000", 00:15:05.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:05.368 "listen_address": { 00:15:05.368 "trtype": "TCP", 00:15:05.368 "adrfam": "IPv4", 00:15:05.368 "traddr": "10.0.0.2", 00:15:05.368 "trsvcid": "4420" 00:15:05.368 }, 00:15:05.368 "peer_address": { 00:15:05.368 "trtype": "TCP", 00:15:05.368 "adrfam": "IPv4", 00:15:05.368 "traddr": "10.0.0.1", 00:15:05.368 "trsvcid": "36954" 00:15:05.368 }, 00:15:05.368 "auth": { 00:15:05.368 "state": "completed", 00:15:05.368 "digest": "sha256", 00:15:05.368 "dhgroup": "ffdhe2048" 00:15:05.368 } 00:15:05.368 } 00:15:05.368 ]' 00:15:05.368 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.368 10:43:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.368 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.368 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:05.368 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.368 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.368 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.368 10:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.626 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:05.626 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:06.192 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.450 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.450 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.450 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.450 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.450 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.450 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:06.450 10:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:06.450 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:06.450 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.450 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:06.450 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:06.450 10:43:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:06.450 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.450 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.450 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.450 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.450 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.450 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.450 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.451 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.709 00:15:06.709 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.709 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.709 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.967 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.967 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.967 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.967 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.967 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.967 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.967 { 00:15:06.967 "cntlid": 13, 00:15:06.967 "qid": 0, 00:15:06.967 "state": "enabled", 00:15:06.967 "thread": "nvmf_tgt_poll_group_000", 00:15:06.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:06.967 "listen_address": { 00:15:06.967 "trtype": "TCP", 00:15:06.967 "adrfam": "IPv4", 00:15:06.967 "traddr": "10.0.0.2", 00:15:06.967 "trsvcid": "4420" 00:15:06.967 }, 00:15:06.967 "peer_address": { 00:15:06.967 "trtype": "TCP", 00:15:06.967 "adrfam": "IPv4", 00:15:06.967 "traddr": "10.0.0.1", 00:15:06.967 "trsvcid": "36992" 00:15:06.967 }, 00:15:06.967 "auth": { 00:15:06.967 "state": "completed", 00:15:06.967 "digest": 
"sha256", 00:15:06.967 "dhgroup": "ffdhe2048" 00:15:06.967 } 00:15:06.967 } 00:15:06.967 ]' 00:15:06.967 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.967 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.967 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.967 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:06.967 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.225 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.225 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.225 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.225 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:07.225 10:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:07.791 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.791 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:07.791 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.791 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.050 10:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.050 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.308 00:15:08.308 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.308 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.308 10:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.566 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.566 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.566 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.566 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.566 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.566 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.566 { 00:15:08.566 "cntlid": 15, 00:15:08.566 "qid": 0, 00:15:08.566 "state": "enabled", 00:15:08.566 "thread": "nvmf_tgt_poll_group_000", 00:15:08.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:08.566 "listen_address": { 00:15:08.567 "trtype": "TCP", 00:15:08.567 "adrfam": "IPv4", 00:15:08.567 "traddr": "10.0.0.2", 00:15:08.567 "trsvcid": "4420" 00:15:08.567 }, 00:15:08.567 "peer_address": { 00:15:08.567 "trtype": "TCP", 00:15:08.567 "adrfam": "IPv4", 00:15:08.567 "traddr": "10.0.0.1", 00:15:08.567 
"trsvcid": "48260" 00:15:08.567 }, 00:15:08.567 "auth": { 00:15:08.567 "state": "completed", 00:15:08.567 "digest": "sha256", 00:15:08.567 "dhgroup": "ffdhe2048" 00:15:08.567 } 00:15:08.567 } 00:15:08.567 ]' 00:15:08.567 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.567 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.567 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.567 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:08.567 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.825 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.825 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.825 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.825 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:08.825 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:09.391 10:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.391 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:09.391 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.391 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.391 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.391 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:09.391 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.391 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:09.391 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:09.649 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:09.649 10:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.649 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:09.649 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:09.649 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:09.649 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.649 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.649 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.649 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.649 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.649 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.649 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.649 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.908 00:15:09.908 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.908 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.908 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.166 { 00:15:10.166 "cntlid": 17, 00:15:10.166 "qid": 0, 00:15:10.166 "state": "enabled", 00:15:10.166 "thread": "nvmf_tgt_poll_group_000", 00:15:10.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:10.166 "listen_address": { 00:15:10.166 "trtype": "TCP", 00:15:10.166 "adrfam": "IPv4", 
00:15:10.166 "traddr": "10.0.0.2", 00:15:10.166 "trsvcid": "4420" 00:15:10.166 }, 00:15:10.166 "peer_address": { 00:15:10.166 "trtype": "TCP", 00:15:10.166 "adrfam": "IPv4", 00:15:10.166 "traddr": "10.0.0.1", 00:15:10.166 "trsvcid": "48296" 00:15:10.166 }, 00:15:10.166 "auth": { 00:15:10.166 "state": "completed", 00:15:10.166 "digest": "sha256", 00:15:10.166 "dhgroup": "ffdhe3072" 00:15:10.166 } 00:15:10.166 } 00:15:10.166 ]' 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.166 10:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.424 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:10.424 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:10.990 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.990 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:10.990 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.990 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.990 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.990 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.990 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:10.990 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:11.248 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:11.248 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.248 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.248 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:11.248 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:11.248 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.248 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.248 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.248 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.249 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.249 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.249 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.249 10:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.507 00:15:11.507 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.507 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.507 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.764 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.764 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.764 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.764 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.764 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.764 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.764 { 
00:15:11.764 "cntlid": 19, 00:15:11.764 "qid": 0, 00:15:11.764 "state": "enabled", 00:15:11.764 "thread": "nvmf_tgt_poll_group_000", 00:15:11.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:11.764 "listen_address": { 00:15:11.764 "trtype": "TCP", 00:15:11.764 "adrfam": "IPv4", 00:15:11.764 "traddr": "10.0.0.2", 00:15:11.764 "trsvcid": "4420" 00:15:11.764 }, 00:15:11.764 "peer_address": { 00:15:11.764 "trtype": "TCP", 00:15:11.764 "adrfam": "IPv4", 00:15:11.764 "traddr": "10.0.0.1", 00:15:11.764 "trsvcid": "48328" 00:15:11.764 }, 00:15:11.764 "auth": { 00:15:11.764 "state": "completed", 00:15:11.764 "digest": "sha256", 00:15:11.764 "dhgroup": "ffdhe3072" 00:15:11.764 } 00:15:11.764 } 00:15:11.764 ]' 00:15:11.764 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.765 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.765 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.765 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:11.765 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.765 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.765 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.765 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.022 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:12.022 10:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:12.587 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.587 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:12.587 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.587 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.587 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.587 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.587 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:12.587 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:12.845 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:12.845 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.845 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.845 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:12.845 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:12.845 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.845 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.845 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.845 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.845 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.845 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.845 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.845 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.104 00:15:13.104 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.104 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.104 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.362 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.362 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.362 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.362 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.362 10:43:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.362 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.362 { 00:15:13.362 "cntlid": 21, 00:15:13.362 "qid": 0, 00:15:13.362 "state": "enabled", 00:15:13.362 "thread": "nvmf_tgt_poll_group_000", 00:15:13.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:13.362 "listen_address": { 00:15:13.362 "trtype": "TCP", 00:15:13.362 "adrfam": "IPv4", 00:15:13.362 "traddr": "10.0.0.2", 00:15:13.362 "trsvcid": "4420" 00:15:13.362 }, 00:15:13.362 "peer_address": { 00:15:13.362 "trtype": "TCP", 00:15:13.362 "adrfam": "IPv4", 00:15:13.362 "traddr": "10.0.0.1", 00:15:13.362 "trsvcid": "48346" 00:15:13.362 }, 00:15:13.362 "auth": { 00:15:13.362 "state": "completed", 00:15:13.362 "digest": "sha256", 00:15:13.362 "dhgroup": "ffdhe3072" 00:15:13.362 } 00:15:13.362 } 00:15:13.362 ]' 00:15:13.362 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.362 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.362 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.362 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:13.362 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.362 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.362 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.362 10:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.620 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:13.621 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:14.186 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.186 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:14.186 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.186 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.186 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:14.186 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.186 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:14.186 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:14.445 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:14.445 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.445 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:14.445 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:14.445 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:14.445 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.445 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:14.445 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.445 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.445 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.445 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:14.445 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.445 10:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:14.703 00:15:14.703 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.703 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.703 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.961 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.961 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.961 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.961 10:43:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.961 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.961 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.961 { 00:15:14.961 "cntlid": 23, 00:15:14.961 "qid": 0, 00:15:14.961 "state": "enabled", 00:15:14.961 "thread": "nvmf_tgt_poll_group_000", 00:15:14.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:14.961 "listen_address": { 00:15:14.961 "trtype": "TCP", 00:15:14.961 "adrfam": "IPv4", 00:15:14.961 "traddr": "10.0.0.2", 00:15:14.961 "trsvcid": "4420" 00:15:14.961 }, 00:15:14.961 "peer_address": { 00:15:14.961 "trtype": "TCP", 00:15:14.961 "adrfam": "IPv4", 00:15:14.961 "traddr": "10.0.0.1", 00:15:14.961 "trsvcid": "48376" 00:15:14.961 }, 00:15:14.961 "auth": { 00:15:14.961 "state": "completed", 00:15:14.961 "digest": "sha256", 00:15:14.961 "dhgroup": "ffdhe3072" 00:15:14.961 } 00:15:14.961 } 00:15:14.961 ]' 00:15:14.961 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.961 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.961 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.961 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:14.961 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.961 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.961 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.961 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.219 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:15.219 10:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:15.785 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.785 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:15.785 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.785 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.785 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:15.785 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.785 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.785 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:15.785 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:16.044 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:16.044 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.044 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.044 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:16.044 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:16.044 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.044 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.044 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.044 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.044 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.044 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.044 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.044 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.302 00:15:16.302 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.302 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.302 10:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.560 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.560 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.560 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.560 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.560 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.560 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.560 { 00:15:16.560 "cntlid": 25, 00:15:16.560 "qid": 0, 00:15:16.560 "state": "enabled", 00:15:16.560 "thread": "nvmf_tgt_poll_group_000", 00:15:16.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:16.561 "listen_address": { 00:15:16.561 "trtype": "TCP", 00:15:16.561 "adrfam": "IPv4", 00:15:16.561 "traddr": "10.0.0.2", 00:15:16.561 "trsvcid": "4420" 00:15:16.561 }, 00:15:16.561 "peer_address": { 00:15:16.561 "trtype": "TCP", 00:15:16.561 "adrfam": "IPv4", 00:15:16.561 "traddr": "10.0.0.1", 00:15:16.561 "trsvcid": "48396" 00:15:16.561 }, 00:15:16.561 "auth": { 00:15:16.561 "state": "completed", 00:15:16.561 "digest": "sha256", 00:15:16.561 "dhgroup": "ffdhe4096" 00:15:16.561 } 00:15:16.561 } 00:15:16.561 ]' 00:15:16.561 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.561 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.561 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.561 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:16.561 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.561 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.561 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.561 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.819 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:16.819 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:17.385 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.385 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:17.385 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.385 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.385 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.385 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.385 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:17.385 10:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:17.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:17.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:17.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:17.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.643 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.644 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.901 00:15:17.901 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.901 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.901 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.160 { 00:15:18.160 "cntlid": 27, 00:15:18.160 "qid": 0, 00:15:18.160 "state": "enabled", 00:15:18.160 "thread": "nvmf_tgt_poll_group_000", 00:15:18.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:18.160 "listen_address": { 00:15:18.160 "trtype": "TCP", 00:15:18.160 "adrfam": "IPv4", 00:15:18.160 "traddr": "10.0.0.2", 00:15:18.160 "trsvcid": "4420" 00:15:18.160 }, 00:15:18.160 "peer_address": { 00:15:18.160 "trtype": "TCP", 00:15:18.160 "adrfam": "IPv4", 00:15:18.160 "traddr": "10.0.0.1", 00:15:18.160 "trsvcid": "49456" 00:15:18.160 }, 00:15:18.160 "auth": { 00:15:18.160 "state": "completed", 00:15:18.160 "digest": "sha256", 00:15:18.160 "dhgroup": "ffdhe4096" 00:15:18.160 } 00:15:18.160 } 00:15:18.160 ]' 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.160 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.418 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:18.418 10:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:18.985 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:18.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.985 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:18.985 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.985 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.985 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.985 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.985 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:18.985 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:19.243 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:19.243 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.243 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.243 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:19.243 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:19.243 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.243 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.243 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.243 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.243 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.243 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.243 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.243 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.501 00:15:19.501 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
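(Illustrative summary, not output from the run.) The per-key pass being repeated here, target/auth.sh connect_authenticate with sha256 and ffdhe4096, condenses to the sequence below. Only commands that appear verbatim in this log are used; the rpc/hostnqn/subnqn shell variables are shorthand introduced for readability, the host application answers on /var/tmp/host.sock, and the target-side calls assume the default SPDK RPC socket, which is how the rpc_cmd wrapper behaves in this run.

  # Condensed sketch of one connect_authenticate iteration (digest sha256, dhgroup ffdhe4096, key2).
  # $rpc/$hostnqn/$subnqn are editorial shorthand for the literal paths and NQNs seen in this log.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: limit DH-HMAC-CHAP negotiation to the digest/dhgroup under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  # Target side: allow the host with the key pair under test (the controller key is optional,
  # per the ${ckeys[$3]:+...} expansion in the xtrace above).
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Attach an authenticated controller through the host application, then verify the qpair.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'           # expect "completed"
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0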
00:15:19.501 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.501 10:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.759 { 00:15:19.759 "cntlid": 29, 00:15:19.759 "qid": 0, 00:15:19.759 "state": "enabled", 00:15:19.759 "thread": "nvmf_tgt_poll_group_000", 00:15:19.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:19.759 "listen_address": { 00:15:19.759 "trtype": "TCP", 00:15:19.759 "adrfam": "IPv4", 00:15:19.759 "traddr": "10.0.0.2", 00:15:19.759 "trsvcid": "4420" 00:15:19.759 }, 00:15:19.759 "peer_address": { 00:15:19.759 "trtype": "TCP", 00:15:19.759 "adrfam": "IPv4", 00:15:19.759 "traddr": "10.0.0.1", 00:15:19.759 "trsvcid": "49468" 00:15:19.759 }, 00:15:19.759 "auth": { 00:15:19.759 "state": "completed", 00:15:19.759 "digest": "sha256", 00:15:19.759 "dhgroup": "ffdhe4096" 00:15:19.759 } 00:15:19.759 } 00:15:19.759 ]' 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.759 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.017 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:20.017 10:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: 
--dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:20.584 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.584 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:20.584 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.584 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.584 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.584 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.584 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:20.584 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:20.842 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:20.842 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.842 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.842 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:20.842 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:20.842 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.842 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:20.842 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.842 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.842 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.842 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:20.842 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.842 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.101 00:15:21.101 10:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.101 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.101 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.101 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.101 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.101 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.101 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.359 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.359 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.359 { 00:15:21.359 "cntlid": 31, 00:15:21.359 "qid": 0, 00:15:21.359 "state": "enabled", 00:15:21.359 "thread": "nvmf_tgt_poll_group_000", 00:15:21.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:21.359 "listen_address": { 00:15:21.359 "trtype": "TCP", 00:15:21.359 "adrfam": "IPv4", 00:15:21.359 "traddr": "10.0.0.2", 00:15:21.359 "trsvcid": "4420" 00:15:21.359 }, 00:15:21.359 "peer_address": { 00:15:21.359 "trtype": "TCP", 00:15:21.359 "adrfam": "IPv4", 00:15:21.359 "traddr": "10.0.0.1", 00:15:21.359 "trsvcid": "49490" 00:15:21.359 }, 00:15:21.359 "auth": { 00:15:21.359 "state": "completed", 00:15:21.359 "digest": "sha256", 00:15:21.359 "dhgroup": "ffdhe4096" 00:15:21.359 } 00:15:21.359 } 00:15:21.359 ]' 00:15:21.359 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.359 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.359 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.359 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:21.359 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.359 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.359 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.359 10:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.617 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:21.617 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:22.182 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.182 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:22.182 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.182 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.182 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.182 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.182 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.182 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:22.182 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:22.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:22.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:22.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:22.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.439 10:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.697 00:15:22.697 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.697 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.697 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.954 { 00:15:22.954 "cntlid": 33, 00:15:22.954 "qid": 0, 00:15:22.954 "state": "enabled", 00:15:22.954 "thread": "nvmf_tgt_poll_group_000", 00:15:22.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:22.954 "listen_address": { 00:15:22.954 "trtype": "TCP", 00:15:22.954 "adrfam": "IPv4", 00:15:22.954 "traddr": "10.0.0.2", 00:15:22.954 "trsvcid": "4420" 00:15:22.954 }, 00:15:22.954 "peer_address": { 00:15:22.954 "trtype": "TCP", 00:15:22.954 "adrfam": "IPv4", 00:15:22.954 "traddr": "10.0.0.1", 00:15:22.954 "trsvcid": "49526" 00:15:22.954 }, 00:15:22.954 "auth": { 00:15:22.954 "state": "completed", 00:15:22.954 "digest": "sha256", 00:15:22.954 "dhgroup": "ffdhe6144" 00:15:22.954 } 00:15:22.954 } 00:15:22.954 ]' 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.954 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.212 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:23.212 10:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:23.779 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.779 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.779 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.779 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.779 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.779 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.779 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:23.779 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:24.037 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:24.037 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.037 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:24.037 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:24.037 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:24.037 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.037 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.037 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.037 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.037 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.037 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.037 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.037 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.295 00:15:24.295 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.295 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.295 10:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.554 { 00:15:24.554 "cntlid": 35, 00:15:24.554 "qid": 0, 00:15:24.554 "state": "enabled", 00:15:24.554 "thread": "nvmf_tgt_poll_group_000", 00:15:24.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:24.554 "listen_address": { 00:15:24.554 "trtype": "TCP", 00:15:24.554 "adrfam": "IPv4", 00:15:24.554 "traddr": "10.0.0.2", 00:15:24.554 "trsvcid": "4420" 00:15:24.554 }, 00:15:24.554 "peer_address": { 00:15:24.554 "trtype": "TCP", 00:15:24.554 "adrfam": "IPv4", 00:15:24.554 "traddr": "10.0.0.1", 00:15:24.554 "trsvcid": "49556" 00:15:24.554 }, 00:15:24.554 "auth": { 00:15:24.554 "state": "completed", 00:15:24.554 "digest": "sha256", 00:15:24.554 "dhgroup": "ffdhe6144" 00:15:24.554 } 00:15:24.554 } 00:15:24.554 ]' 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.554 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.812 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:24.812 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:25.380 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.380 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:25.380 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.380 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.380 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.380 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.380 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:25.380 10:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:25.639 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:25.639 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.639 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.639 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:25.639 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:25.639 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.639 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.639 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.639 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.639 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.639 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.639 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.639 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.897 00:15:26.154 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.154 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.154 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.154 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.154 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.154 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.154 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.154 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.154 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.154 { 00:15:26.154 "cntlid": 37, 00:15:26.154 "qid": 0, 00:15:26.154 "state": "enabled", 00:15:26.154 "thread": "nvmf_tgt_poll_group_000", 00:15:26.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:26.154 "listen_address": { 00:15:26.154 "trtype": "TCP", 00:15:26.154 "adrfam": "IPv4", 00:15:26.154 "traddr": "10.0.0.2", 00:15:26.154 "trsvcid": "4420" 00:15:26.154 }, 00:15:26.154 "peer_address": { 00:15:26.154 "trtype": "TCP", 00:15:26.154 "adrfam": "IPv4", 00:15:26.154 "traddr": "10.0.0.1", 00:15:26.154 "trsvcid": "49580" 00:15:26.154 }, 00:15:26.154 "auth": { 00:15:26.154 "state": "completed", 00:15:26.154 "digest": "sha256", 00:15:26.154 "dhgroup": "ffdhe6144" 00:15:26.154 } 00:15:26.154 } 00:15:26.154 ]' 00:15:26.154 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.154 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.154 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.412 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:26.412 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.412 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.412 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:26.412 10:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.670 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:26.670 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.236 10:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.236 10:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.803 00:15:27.803 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.803 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.803 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.803 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.803 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.803 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.803 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.803 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.803 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.803 { 00:15:27.803 "cntlid": 39, 00:15:27.803 "qid": 0, 00:15:27.803 "state": "enabled", 00:15:27.803 "thread": "nvmf_tgt_poll_group_000", 00:15:27.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:27.803 "listen_address": { 00:15:27.803 "trtype": "TCP", 00:15:27.803 "adrfam": "IPv4", 00:15:27.803 "traddr": "10.0.0.2", 00:15:27.803 "trsvcid": "4420" 00:15:27.803 }, 00:15:27.803 "peer_address": { 00:15:27.803 "trtype": "TCP", 00:15:27.803 "adrfam": "IPv4", 00:15:27.803 "traddr": "10.0.0.1", 00:15:27.803 "trsvcid": "47768" 00:15:27.803 }, 00:15:27.803 "auth": { 00:15:27.803 "state": "completed", 00:15:27.803 "digest": "sha256", 00:15:27.803 "dhgroup": "ffdhe6144" 00:15:27.803 } 00:15:27.803 } 00:15:27.803 ]' 00:15:27.803 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.061 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.061 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.061 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:28.061 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.061 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:28.061 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.061 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.319 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:28.319 10:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
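(Illustrative summary, not output from the run.) Before each host entry is removed, the same credentials are also verified through the kernel initiator, which is what the nvme_connect/nvme disconnect lines above show. Stripped of the wrapper functions, that leg is roughly the following; key0_secret and ckey0_secret are placeholders for the DHHC-1:00:... and DHHC-1:03:... strings printed by nvme_connect in this log, and the final rpc.py call assumes the target listens on the default SPDK socket, as rpc_cmd does here.

  # Kernel-initiator leg of each pass: connect with the DH-HMAC-CHAP secrets, then tear down.
  # $key0_secret / $ckey0_secret stand in for the DHHC-1 strings shown earlier in this log;
  # they are placeholders for readability, not variables defined by the test script.
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  subnqn=nqn.2024-03.io.spdk:cnode0

  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
      --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"
  nvme disconnect -n "$subnqn"        # log reports "disconnected 1 controller(s)"
  # Drop the host entry on the target so the next key/dhgroup combination starts clean.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_remove_host "$subnqn" "$hostnqn"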
00:15:28.886 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.145 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.145 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.145 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.145 10:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.403 00:15:29.403 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.403 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.403 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.661 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.661 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.661 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.661 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.661 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.661 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.661 { 00:15:29.661 "cntlid": 41, 00:15:29.661 "qid": 0, 00:15:29.661 "state": "enabled", 00:15:29.661 "thread": "nvmf_tgt_poll_group_000", 00:15:29.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:29.661 "listen_address": { 00:15:29.661 "trtype": "TCP", 00:15:29.661 "adrfam": "IPv4", 00:15:29.662 "traddr": "10.0.0.2", 00:15:29.662 "trsvcid": "4420" 00:15:29.662 }, 00:15:29.662 "peer_address": { 00:15:29.662 "trtype": "TCP", 00:15:29.662 "adrfam": "IPv4", 00:15:29.662 "traddr": "10.0.0.1", 00:15:29.662 "trsvcid": "47790" 00:15:29.662 }, 00:15:29.662 "auth": { 00:15:29.662 "state": "completed", 00:15:29.662 "digest": "sha256", 00:15:29.662 "dhgroup": "ffdhe8192" 00:15:29.662 } 00:15:29.662 } 00:15:29.662 ]' 00:15:29.662 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.662 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.662 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.920 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:29.920 10:43:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.920 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.920 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.920 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.178 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:30.178 10:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.744 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.745 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.311 00:15:31.311 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.311 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.311 10:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.569 { 00:15:31.569 "cntlid": 43, 00:15:31.569 "qid": 0, 00:15:31.569 "state": "enabled", 00:15:31.569 "thread": "nvmf_tgt_poll_group_000", 00:15:31.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:31.569 "listen_address": { 00:15:31.569 "trtype": "TCP", 00:15:31.569 "adrfam": "IPv4", 00:15:31.569 "traddr": "10.0.0.2", 00:15:31.569 "trsvcid": "4420" 00:15:31.569 }, 00:15:31.569 "peer_address": { 00:15:31.569 "trtype": "TCP", 00:15:31.569 "adrfam": "IPv4", 00:15:31.569 "traddr": "10.0.0.1", 00:15:31.569 "trsvcid": "47824" 00:15:31.569 }, 00:15:31.569 "auth": { 00:15:31.569 "state": "completed", 00:15:31.569 "digest": "sha256", 00:15:31.569 "dhgroup": "ffdhe8192" 00:15:31.569 } 00:15:31.569 } 00:15:31.569 ]' 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.569 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.827 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:31.827 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:32.392 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.392 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.392 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.392 10:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.392 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.392 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.392 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:32.392 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:32.651 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:32.651 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.651 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.651 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:32.651 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:32.651 10:44:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.651 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.651 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.651 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.651 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.651 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.651 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.651 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.217 00:15:33.217 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.217 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.217 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.475 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.475 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.476 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.476 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.476 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.476 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.476 { 00:15:33.476 "cntlid": 45, 00:15:33.476 "qid": 0, 00:15:33.476 "state": "enabled", 00:15:33.476 "thread": "nvmf_tgt_poll_group_000", 00:15:33.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:33.476 "listen_address": { 00:15:33.476 "trtype": "TCP", 00:15:33.476 "adrfam": "IPv4", 00:15:33.476 "traddr": "10.0.0.2", 00:15:33.476 "trsvcid": "4420" 00:15:33.476 }, 00:15:33.476 "peer_address": { 00:15:33.476 "trtype": "TCP", 00:15:33.476 "adrfam": "IPv4", 00:15:33.476 "traddr": "10.0.0.1", 00:15:33.476 "trsvcid": "47848" 00:15:33.476 }, 00:15:33.476 "auth": { 00:15:33.476 "state": "completed", 00:15:33.476 "digest": "sha256", 00:15:33.476 "dhgroup": "ffdhe8192" 00:15:33.476 } 00:15:33.476 } 00:15:33.476 ]' 00:15:33.476 
10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.476 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.476 10:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.476 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:33.476 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.476 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.476 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.476 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.734 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:33.734 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:34.298 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.298 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.298 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.298 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.298 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.298 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.298 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:34.298 10:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:34.555 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:34.555 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.556 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.556 10:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:34.556 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:34.556 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.556 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:34.556 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.556 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.556 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.556 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:34.556 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:34.556 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.122 00:15:35.122 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.122 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.122 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.122 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.122 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.122 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.122 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.122 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.122 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.122 { 00:15:35.122 "cntlid": 47, 00:15:35.122 "qid": 0, 00:15:35.122 "state": "enabled", 00:15:35.122 "thread": "nvmf_tgt_poll_group_000", 00:15:35.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:35.122 "listen_address": { 00:15:35.122 "trtype": "TCP", 00:15:35.122 "adrfam": "IPv4", 00:15:35.122 "traddr": "10.0.0.2", 00:15:35.122 "trsvcid": "4420" 00:15:35.122 }, 00:15:35.122 "peer_address": { 00:15:35.122 "trtype": "TCP", 00:15:35.122 "adrfam": "IPv4", 00:15:35.122 "traddr": "10.0.0.1", 00:15:35.122 "trsvcid": "47876" 00:15:35.122 }, 00:15:35.122 "auth": { 00:15:35.122 "state": "completed", 00:15:35.122 
"digest": "sha256", 00:15:35.122 "dhgroup": "ffdhe8192" 00:15:35.122 } 00:15:35.122 } 00:15:35.122 ]' 00:15:35.122 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.380 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.380 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.380 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:35.380 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.380 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.380 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.380 10:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.638 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:35.638 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:36.204 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:36.205 10:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.205 10:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.463 00:15:36.463 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.463 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.463 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.721 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.721 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.721 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.721 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.721 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.721 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.721 { 00:15:36.721 "cntlid": 49, 00:15:36.721 "qid": 0, 00:15:36.721 "state": "enabled", 00:15:36.721 "thread": "nvmf_tgt_poll_group_000", 00:15:36.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:36.721 "listen_address": { 00:15:36.721 "trtype": "TCP", 00:15:36.721 "adrfam": "IPv4", 
00:15:36.721 "traddr": "10.0.0.2", 00:15:36.721 "trsvcid": "4420" 00:15:36.721 }, 00:15:36.721 "peer_address": { 00:15:36.721 "trtype": "TCP", 00:15:36.721 "adrfam": "IPv4", 00:15:36.721 "traddr": "10.0.0.1", 00:15:36.721 "trsvcid": "47902" 00:15:36.721 }, 00:15:36.721 "auth": { 00:15:36.721 "state": "completed", 00:15:36.721 "digest": "sha384", 00:15:36.721 "dhgroup": "null" 00:15:36.721 } 00:15:36.721 } 00:15:36.721 ]' 00:15:36.721 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.721 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.721 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.980 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:36.980 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.980 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.980 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.980 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.980 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:36.980 10:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:37.547 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.806 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.064 00:15:38.064 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.064 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.064 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.322 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.322 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.322 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.322 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.322 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.322 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.322 { 00:15:38.322 "cntlid": 51, 00:15:38.322 "qid": 0, 00:15:38.322 "state": "enabled", 
00:15:38.322 "thread": "nvmf_tgt_poll_group_000", 00:15:38.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:38.322 "listen_address": { 00:15:38.322 "trtype": "TCP", 00:15:38.322 "adrfam": "IPv4", 00:15:38.322 "traddr": "10.0.0.2", 00:15:38.322 "trsvcid": "4420" 00:15:38.322 }, 00:15:38.322 "peer_address": { 00:15:38.322 "trtype": "TCP", 00:15:38.322 "adrfam": "IPv4", 00:15:38.322 "traddr": "10.0.0.1", 00:15:38.322 "trsvcid": "33734" 00:15:38.322 }, 00:15:38.322 "auth": { 00:15:38.322 "state": "completed", 00:15:38.322 "digest": "sha384", 00:15:38.322 "dhgroup": "null" 00:15:38.322 } 00:15:38.322 } 00:15:38.322 ]' 00:15:38.323 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.323 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.323 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.323 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:38.323 10:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.581 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.581 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.581 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.581 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:38.581 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:39.146 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.146 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:39.146 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.147 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.147 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.147 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.147 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:15:39.147 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:39.405 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:39.405 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.405 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:39.405 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:39.405 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:39.405 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.405 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.405 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.405 10:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.405 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.405 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.405 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.405 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.663 00:15:39.663 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.663 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.663 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.921 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.921 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.921 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.921 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.921 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.921 10:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:39.921 { 00:15:39.921 "cntlid": 53, 00:15:39.921 "qid": 0, 00:15:39.921 "state": "enabled", 00:15:39.921 "thread": "nvmf_tgt_poll_group_000", 00:15:39.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:39.921 "listen_address": { 00:15:39.921 "trtype": "TCP", 00:15:39.921 "adrfam": "IPv4", 00:15:39.921 "traddr": "10.0.0.2", 00:15:39.921 "trsvcid": "4420" 00:15:39.921 }, 00:15:39.921 "peer_address": { 00:15:39.921 "trtype": "TCP", 00:15:39.921 "adrfam": "IPv4", 00:15:39.921 "traddr": "10.0.0.1", 00:15:39.921 "trsvcid": "33768" 00:15:39.921 }, 00:15:39.921 "auth": { 00:15:39.921 "state": "completed", 00:15:39.921 "digest": "sha384", 00:15:39.921 "dhgroup": "null" 00:15:39.921 } 00:15:39.921 } 00:15:39.921 ]' 00:15:39.921 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:39.921 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.921 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.921 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:39.921 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.179 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.179 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.179 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.180 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:40.180 10:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:40.746 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.746 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.746 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.746 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.746 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.746 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:40.746 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:40.746 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:41.004 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:41.004 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.004 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.004 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:41.004 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:41.004 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.004 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:41.004 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.004 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.004 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.004 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:41.004 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.004 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.262 00:15:41.262 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.262 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.262 10:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.520 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.520 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.520 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.520 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.521 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.521 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.521 { 00:15:41.521 "cntlid": 55, 00:15:41.521 "qid": 0, 00:15:41.521 "state": "enabled", 00:15:41.521 "thread": "nvmf_tgt_poll_group_000", 00:15:41.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:41.521 "listen_address": { 00:15:41.521 "trtype": "TCP", 00:15:41.521 "adrfam": "IPv4", 00:15:41.521 "traddr": "10.0.0.2", 00:15:41.521 "trsvcid": "4420" 00:15:41.521 }, 00:15:41.521 "peer_address": { 00:15:41.521 "trtype": "TCP", 00:15:41.521 "adrfam": "IPv4", 00:15:41.521 "traddr": "10.0.0.1", 00:15:41.521 "trsvcid": "33800" 00:15:41.521 }, 00:15:41.521 "auth": { 00:15:41.521 "state": "completed", 00:15:41.521 "digest": "sha384", 00:15:41.521 "dhgroup": "null" 00:15:41.521 } 00:15:41.521 } 00:15:41.521 ]' 00:15:41.521 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.521 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.521 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.521 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:41.521 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.521 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.521 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.521 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.778 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:41.778 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:42.344 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.344 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:42.344 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.344 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.344 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.344 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.344 10:44:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.344 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:42.344 10:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:42.602 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:42.602 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.602 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.602 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:42.602 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.602 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.602 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.602 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.602 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.602 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.602 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.602 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.602 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.859 00:15:42.859 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.859 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.860 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.117 { 00:15:43.117 "cntlid": 57, 00:15:43.117 "qid": 0, 00:15:43.117 "state": "enabled", 00:15:43.117 "thread": "nvmf_tgt_poll_group_000", 00:15:43.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:43.117 "listen_address": { 00:15:43.117 "trtype": "TCP", 00:15:43.117 "adrfam": "IPv4", 00:15:43.117 "traddr": "10.0.0.2", 00:15:43.117 "trsvcid": "4420" 00:15:43.117 }, 00:15:43.117 "peer_address": { 00:15:43.117 "trtype": "TCP", 00:15:43.117 "adrfam": "IPv4", 00:15:43.117 "traddr": "10.0.0.1", 00:15:43.117 "trsvcid": "33832" 00:15:43.117 }, 00:15:43.117 "auth": { 00:15:43.117 "state": "completed", 00:15:43.117 "digest": "sha384", 00:15:43.117 "dhgroup": "ffdhe2048" 00:15:43.117 } 00:15:43.117 } 00:15:43.117 ]' 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.117 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.375 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:43.375 10:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:43.938 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.938 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.938 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.938 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.938 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.938 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.938 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:43.938 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:44.195 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:44.195 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.195 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:44.195 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:44.196 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:44.196 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.196 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.196 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.196 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.196 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.196 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.196 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.196 10:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.454 00:15:44.454 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.454 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.454 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.711 { 00:15:44.711 "cntlid": 59, 00:15:44.711 "qid": 0, 00:15:44.711 "state": "enabled", 00:15:44.711 "thread": "nvmf_tgt_poll_group_000", 00:15:44.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:44.711 "listen_address": { 00:15:44.711 "trtype": "TCP", 00:15:44.711 "adrfam": "IPv4", 00:15:44.711 "traddr": "10.0.0.2", 00:15:44.711 "trsvcid": "4420" 00:15:44.711 }, 00:15:44.711 "peer_address": { 00:15:44.711 "trtype": "TCP", 00:15:44.711 "adrfam": "IPv4", 00:15:44.711 "traddr": "10.0.0.1", 00:15:44.711 "trsvcid": "33848" 00:15:44.711 }, 00:15:44.711 "auth": { 00:15:44.711 "state": "completed", 00:15:44.711 "digest": "sha384", 00:15:44.711 "dhgroup": "ffdhe2048" 00:15:44.711 } 00:15:44.711 } 00:15:44.711 ]' 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.711 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.968 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:44.968 10:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:45.534 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.534 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.534 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.534 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.534 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.534 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.534 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:45.534 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:45.793 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:45.793 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.793 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.793 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:45.793 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:45.793 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.793 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.793 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.793 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.793 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.793 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.793 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.793 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.051 00:15:46.051 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.051 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.051 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.309 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.309 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.309 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.309 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.309 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.309 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.309 { 00:15:46.309 "cntlid": 61, 00:15:46.309 "qid": 0, 00:15:46.309 "state": "enabled", 00:15:46.309 "thread": "nvmf_tgt_poll_group_000", 00:15:46.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:46.309 "listen_address": { 00:15:46.309 "trtype": "TCP", 00:15:46.309 "adrfam": "IPv4", 00:15:46.309 "traddr": "10.0.0.2", 00:15:46.309 "trsvcid": "4420" 00:15:46.309 }, 00:15:46.309 "peer_address": { 00:15:46.309 "trtype": "TCP", 00:15:46.309 "adrfam": "IPv4", 00:15:46.309 "traddr": "10.0.0.1", 00:15:46.309 "trsvcid": "33880" 00:15:46.309 }, 00:15:46.309 "auth": { 00:15:46.309 "state": "completed", 00:15:46.309 "digest": "sha384", 00:15:46.309 "dhgroup": "ffdhe2048" 00:15:46.309 } 00:15:46.309 } 00:15:46.309 ]' 00:15:46.309 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.309 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.309 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.309 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:46.309 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.309 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.310 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.310 10:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.567 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:46.567 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:47.133 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.133 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.133 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.133 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.133 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.133 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.133 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:47.133 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:47.391 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:47.391 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.391 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.391 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:47.391 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.391 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.391 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:47.391 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.392 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.392 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.392 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.392 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.392 10:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.650 00:15:47.650 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.650 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.650 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.908 { 00:15:47.908 "cntlid": 63, 00:15:47.908 "qid": 0, 00:15:47.908 "state": "enabled", 00:15:47.908 "thread": "nvmf_tgt_poll_group_000", 00:15:47.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:47.908 "listen_address": { 00:15:47.908 "trtype": "TCP", 00:15:47.908 "adrfam": "IPv4", 00:15:47.908 "traddr": "10.0.0.2", 00:15:47.908 "trsvcid": "4420" 00:15:47.908 }, 00:15:47.908 "peer_address": { 00:15:47.908 "trtype": "TCP", 00:15:47.908 "adrfam": "IPv4", 00:15:47.908 "traddr": "10.0.0.1", 00:15:47.908 "trsvcid": "33010" 00:15:47.908 }, 00:15:47.908 "auth": { 00:15:47.908 "state": "completed", 00:15:47.908 "digest": "sha384", 00:15:47.908 "dhgroup": "ffdhe2048" 00:15:47.908 } 00:15:47.908 } 00:15:47.908 ]' 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.908 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.166 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:48.166 10:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:48.733 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:48.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.733 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.733 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.733 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.733 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.733 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.733 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.733 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.733 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.991 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:48.991 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.991 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.991 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:48.991 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:48.991 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.991 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.991 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.991 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.991 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.991 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.991 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.991 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.249 
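Note on the qpair listings: every nvmf_subsystem_get_qpairs block in this trace has the same JSON shape (cntlid, listen_address, peer_address, and auth with digest/dhgroup/state). When skimming a long run it can be easier to reduce each listing to one line per qpair. The filter below is a convenience sketch only, using field names taken from the JSON above and the rpc.py path and subsystem NQN from this run; it assumes the target application answers on rpc.py's default socket (in the script itself the rpc_cmd wrapper takes care of that detail).

# one line per qpair: cntlid, negotiated digest, dhgroup, auth state
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
  | jq -r '.[] | [.cntlid, .auth.digest, .auth.dhgroup, .auth.state] | @tsv'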
00:15:49.249 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.249 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.249 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.507 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.507 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.507 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.507 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.507 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.507 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.507 { 00:15:49.507 "cntlid": 65, 00:15:49.507 "qid": 0, 00:15:49.507 "state": "enabled", 00:15:49.507 "thread": "nvmf_tgt_poll_group_000", 00:15:49.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:49.507 "listen_address": { 00:15:49.507 "trtype": "TCP", 00:15:49.507 "adrfam": "IPv4", 00:15:49.507 "traddr": "10.0.0.2", 00:15:49.507 "trsvcid": "4420" 00:15:49.507 }, 00:15:49.507 "peer_address": { 00:15:49.507 "trtype": "TCP", 00:15:49.507 "adrfam": "IPv4", 00:15:49.507 "traddr": "10.0.0.1", 00:15:49.507 "trsvcid": "33036" 00:15:49.507 }, 00:15:49.507 "auth": { 00:15:49.507 "state": "completed", 00:15:49.507 "digest": "sha384", 00:15:49.507 "dhgroup": "ffdhe3072" 00:15:49.507 } 00:15:49.507 } 00:15:49.507 ]' 00:15:49.507 10:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.507 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.507 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.507 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.507 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.507 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.507 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.507 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.765 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:49.765 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:50.332 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.332 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.332 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.332 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.332 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.332 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.332 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:50.332 10:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:50.590 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:50.590 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.590 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.590 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:50.590 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:50.590 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.590 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.590 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.590 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.590 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.590 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.590 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.590 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.848 00:15:50.848 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.848 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.848 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.107 { 00:15:51.107 "cntlid": 67, 00:15:51.107 "qid": 0, 00:15:51.107 "state": "enabled", 00:15:51.107 "thread": "nvmf_tgt_poll_group_000", 00:15:51.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:51.107 "listen_address": { 00:15:51.107 "trtype": "TCP", 00:15:51.107 "adrfam": "IPv4", 00:15:51.107 "traddr": "10.0.0.2", 00:15:51.107 "trsvcid": "4420" 00:15:51.107 }, 00:15:51.107 "peer_address": { 00:15:51.107 "trtype": "TCP", 00:15:51.107 "adrfam": "IPv4", 00:15:51.107 "traddr": "10.0.0.1", 00:15:51.107 "trsvcid": "33074" 00:15:51.107 }, 00:15:51.107 "auth": { 00:15:51.107 "state": "completed", 00:15:51.107 "digest": "sha384", 00:15:51.107 "dhgroup": "ffdhe3072" 00:15:51.107 } 00:15:51.107 } 00:15:51.107 ]' 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.107 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.365 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret 
DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:51.365 10:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:51.959 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.959 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.959 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.959 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.959 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.959 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.959 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:51.959 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:52.229 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:52.229 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.229 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:52.229 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:52.229 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:52.229 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.229 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.229 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.229 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.229 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.229 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.229 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.229 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.501 00:15:52.501 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.501 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.501 10:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.780 { 00:15:52.780 "cntlid": 69, 00:15:52.780 "qid": 0, 00:15:52.780 "state": "enabled", 00:15:52.780 "thread": "nvmf_tgt_poll_group_000", 00:15:52.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:52.780 "listen_address": { 00:15:52.780 "trtype": "TCP", 00:15:52.780 "adrfam": "IPv4", 00:15:52.780 "traddr": "10.0.0.2", 00:15:52.780 "trsvcid": "4420" 00:15:52.780 }, 00:15:52.780 "peer_address": { 00:15:52.780 "trtype": "TCP", 00:15:52.780 "adrfam": "IPv4", 00:15:52.780 "traddr": "10.0.0.1", 00:15:52.780 "trsvcid": "33104" 00:15:52.780 }, 00:15:52.780 "auth": { 00:15:52.780 "state": "completed", 00:15:52.780 "digest": "sha384", 00:15:52.780 "dhgroup": "ffdhe3072" 00:15:52.780 } 00:15:52.780 } 00:15:52.780 ]' 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.780 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:53.052 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:53.052 10:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.629 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.888 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.888 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
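One detail visible in the expansions just above: ckey is built as ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), so the controller-key arguments are appended only when a ckey entry exists for that key index. That is why this key3 pass calls nvmf_subsystem_add_host and bdev_connect with --dhchap-key key3 alone, i.e. unidirectional authentication. A minimal stand-alone illustration of the ${var:+word} pattern, with placeholder names rather than the test script's variables:

#!/usr/bin/env bash
# ${arr[i]:+word} expands to word only when arr[i] is set and non-empty
ckeys=("c0" "c1" "c2")        # hypothetical list: no controller key for index 3
for i in 0 1 2 3; do
    extra=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
    echo "key$i -> ${extra[*]:-none (unidirectional)}"
done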
00:15:53.888 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.888 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.888 00:15:54.146 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.146 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.146 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.146 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.146 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.146 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.146 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.146 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.146 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.146 { 00:15:54.146 "cntlid": 71, 00:15:54.146 "qid": 0, 00:15:54.146 "state": "enabled", 00:15:54.146 "thread": "nvmf_tgt_poll_group_000", 00:15:54.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:54.146 "listen_address": { 00:15:54.146 "trtype": "TCP", 00:15:54.146 "adrfam": "IPv4", 00:15:54.146 "traddr": "10.0.0.2", 00:15:54.146 "trsvcid": "4420" 00:15:54.146 }, 00:15:54.146 "peer_address": { 00:15:54.146 "trtype": "TCP", 00:15:54.147 "adrfam": "IPv4", 00:15:54.147 "traddr": "10.0.0.1", 00:15:54.147 "trsvcid": "33134" 00:15:54.147 }, 00:15:54.147 "auth": { 00:15:54.147 "state": "completed", 00:15:54.147 "digest": "sha384", 00:15:54.147 "dhgroup": "ffdhe3072" 00:15:54.147 } 00:15:54.147 } 00:15:54.147 ]' 00:15:54.147 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.404 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.404 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.404 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:54.404 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.404 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.404 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.404 10:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.663 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:54.663 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
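At this point the trace has finished the sha384/ffdhe3072 pass and the outer for dhgroup loop has moved on to ffdhe4096. For readers reconstructing the flow, the host-side sequence exercised for each (digest, dhgroup, key index) combination condenses to the sketch below. It is a hedged summary rather than the verbatim test script: the paths, listener address 10.0.0.2:4420 and NQNs are the ones used in this run, target-side calls are shown without an explicit socket (as rpc_cmd issues them here), and KEYID is a placeholder for the key index being exercised.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0
KEYID=0    # placeholder key index

# 1. restrict the initiator to the digest/dhgroup pair under test
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# 2. allow the host on the subsystem; key$KEYID / ckey$KEYID are keyring key names registered
#    earlier in the test, and the --dhchap-ctrlr-key pair is dropped when no ckey exists for
#    the index (see the note on the ckey expansion above)
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID

# 3. attach a controller through the authenticated TCP listener
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key$KEYID --dhchap-ctrlr-key ckey$KEYID

# 4. verify the qpair negotiated the expected digest/dhgroup and reached auth state "completed"
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'

# 5. tear down; the script then repeats the check with nvme-cli
#    (nvme connect ... --dhchap-secret <DHHC-1 string> / nvme disconnect) before removing the host
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN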
00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.229 10:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.796 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.796 { 00:15:55.796 "cntlid": 73, 00:15:55.796 "qid": 0, 00:15:55.796 "state": "enabled", 00:15:55.796 "thread": "nvmf_tgt_poll_group_000", 00:15:55.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:55.796 "listen_address": { 00:15:55.796 "trtype": "TCP", 00:15:55.796 "adrfam": "IPv4", 00:15:55.796 "traddr": "10.0.0.2", 00:15:55.796 "trsvcid": "4420" 00:15:55.796 }, 00:15:55.796 "peer_address": { 00:15:55.796 "trtype": "TCP", 00:15:55.796 "adrfam": "IPv4", 00:15:55.796 "traddr": "10.0.0.1", 00:15:55.796 "trsvcid": "33158" 00:15:55.796 }, 00:15:55.796 "auth": { 00:15:55.796 "state": "completed", 00:15:55.796 "digest": "sha384", 00:15:55.796 "dhgroup": "ffdhe4096" 00:15:55.796 } 00:15:55.796 } 00:15:55.796 ]' 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.796 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.055 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.055 
10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.055 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.055 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:56.055 10:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:15:56.621 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.621 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.621 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.621 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.621 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.621 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.621 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.621 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:56.879 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:56.879 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.879 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.879 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:56.879 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:56.879 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.879 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.879 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.879 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.879 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.879 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.879 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.879 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.138 00:15:57.138 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.138 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.138 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.396 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.396 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.396 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.396 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.396 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.396 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.396 { 00:15:57.396 "cntlid": 75, 00:15:57.396 "qid": 0, 00:15:57.396 "state": "enabled", 00:15:57.396 "thread": "nvmf_tgt_poll_group_000", 00:15:57.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:57.396 "listen_address": { 00:15:57.396 "trtype": "TCP", 00:15:57.396 "adrfam": "IPv4", 00:15:57.396 "traddr": "10.0.0.2", 00:15:57.396 "trsvcid": "4420" 00:15:57.396 }, 00:15:57.396 "peer_address": { 00:15:57.396 "trtype": "TCP", 00:15:57.396 "adrfam": "IPv4", 00:15:57.396 "traddr": "10.0.0.1", 00:15:57.396 "trsvcid": "33196" 00:15:57.396 }, 00:15:57.396 "auth": { 00:15:57.396 "state": "completed", 00:15:57.396 "digest": "sha384", 00:15:57.396 "dhgroup": "ffdhe4096" 00:15:57.396 } 00:15:57.396 } 00:15:57.396 ]' 00:15:57.396 10:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.396 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.396 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.655 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:15:57.655 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.655 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.655 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.655 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.655 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:57.655 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:15:58.222 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.480 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.480 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.480 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.480 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.480 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.480 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:58.480 10:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:58.480 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:58.480 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.480 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.480 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:58.480 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:58.480 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.480 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.480 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.480 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.480 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.480 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.480 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.480 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.738 00:15:58.997 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.997 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.997 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.997 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.997 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.997 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.997 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.997 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.997 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.997 { 00:15:58.997 "cntlid": 77, 00:15:58.997 "qid": 0, 00:15:58.997 "state": "enabled", 00:15:58.997 "thread": "nvmf_tgt_poll_group_000", 00:15:58.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:58.997 "listen_address": { 00:15:58.997 "trtype": "TCP", 00:15:58.997 "adrfam": "IPv4", 00:15:58.997 "traddr": "10.0.0.2", 00:15:58.997 "trsvcid": "4420" 00:15:58.997 }, 00:15:58.997 "peer_address": { 00:15:58.997 "trtype": "TCP", 00:15:58.997 "adrfam": "IPv4", 00:15:58.997 "traddr": "10.0.0.1", 00:15:58.997 "trsvcid": "39066" 00:15:58.997 }, 00:15:58.997 "auth": { 00:15:58.997 "state": "completed", 00:15:58.997 "digest": "sha384", 00:15:58.997 "dhgroup": "ffdhe4096" 00:15:58.997 } 00:15:58.997 } 00:15:58.997 ]' 00:15:58.997 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.997 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.256 10:44:26 
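The [[ sha384 == \s\h\a\3\8\4 ]] comparison above (and the dhgroup/state checks that follow it) read the negotiated DH-HMAC-CHAP parameters back out of nvmf_subsystem_get_qpairs. A minimal sketch of that verification step, assuming rpc.py resolves to spdk/scripts/rpc.py and the target application is on its default RPC socket (the jq filters and expected values mirror the trace; the rest is illustrative):

#!/usr/bin/env bash
# Sketch only: confirm the subsystem's first qpair finished DH-HMAC-CHAP with the
# expected digest and DH group, mirroring the auth.sh@73-77 checks in this trace.
set -euo pipefail

SUBSYS=nqn.2024-03.io.spdk:cnode0   # subsystem NQN used throughout this run
RPC=rpc.py                          # assumed to resolve to spdk/scripts/rpc.py (target socket)

qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBSYS")

digest=$(jq -r '.[0].auth.digest'  <<<"$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<<"$qpairs")
state=$(jq -r '.[0].auth.state'    <<<"$qpairs")

[[ $digest  == sha384    ]] || { echo "unexpected digest: $digest"   >&2; exit 1; }
[[ $dhgroup == ffdhe4096 ]] || { echo "unexpected dhgroup: $dhgroup" >&2; exit 1; }
[[ $state   == completed ]] || { echo "auth not completed: $state"   >&2; exit 1; }
echo "qpair authenticated with $digest/$dhgroup"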
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.256 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:59.256 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.256 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.256 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.256 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.514 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:15:59.514 10:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.080 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:00.081 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.081 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.081 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.081 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:00.081 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.081 10:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.339 00:16:00.597 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.597 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.597 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.597 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.597 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.597 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.597 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.597 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.597 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.597 { 00:16:00.597 "cntlid": 79, 00:16:00.597 "qid": 0, 00:16:00.597 "state": "enabled", 00:16:00.597 "thread": "nvmf_tgt_poll_group_000", 00:16:00.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.597 "listen_address": { 00:16:00.597 "trtype": "TCP", 00:16:00.597 "adrfam": "IPv4", 00:16:00.597 "traddr": "10.0.0.2", 00:16:00.597 "trsvcid": "4420" 00:16:00.597 }, 00:16:00.597 "peer_address": { 00:16:00.597 "trtype": "TCP", 00:16:00.597 "adrfam": "IPv4", 00:16:00.597 "traddr": "10.0.0.1", 00:16:00.597 "trsvcid": "39096" 00:16:00.597 }, 00:16:00.597 "auth": { 00:16:00.597 "state": "completed", 00:16:00.597 "digest": "sha384", 00:16:00.597 "dhgroup": "ffdhe4096" 00:16:00.597 } 00:16:00.597 } 00:16:00.597 ]' 00:16:00.597 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.856 10:44:28 
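Every hostrpc frame in this trace is the same rpc.py client pointed at the host-side bdev application's socket (/var/tmp/host.sock) instead of the target's default socket. A small wrapper in the spirit of target/auth.sh@31, with the workspace path taken verbatim from the trace:

# hostrpc: drive the host-side SPDK app on its own RPC socket, as auth.sh@31 does.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
hostrpc() {
    "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
}

# usage seen in this trace:
#   hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
#   hostrpc bdev_nvme_get_controllers | jq -r '.[].name'
#   hostrpc bdev_nvme_detach_controller nvme0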
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.856 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.856 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.856 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.856 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.856 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.856 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.114 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:01.114 10:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:01.680 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:01.681 10:44:29 
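The @119/@120/@121/@123 frames above show the shape of this phase: one pass per DH group and, inside it, one connect_authenticate cycle per key index. Paraphrased as a sketch (it leans on the keys/ckeys arrays and the connect_authenticate helper defined earlier in auth.sh, so it is not standalone):

# Structural paraphrase of the loop visible at auth.sh@119-123 in this trace.
dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)   # the groups exercised in this part of the log

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # pin the host to one digest/DH-group combination for this pass
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # one full add_host / attach / verify / nvme connect / cleanup cycle
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done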
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.681 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.247 00:16:02.247 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.247 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.247 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.247 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.247 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.247 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.247 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.247 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.247 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.247 { 00:16:02.247 "cntlid": 81, 00:16:02.247 "qid": 0, 00:16:02.247 "state": "enabled", 00:16:02.247 "thread": "nvmf_tgt_poll_group_000", 00:16:02.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:02.247 "listen_address": { 00:16:02.247 "trtype": "TCP", 00:16:02.247 "adrfam": "IPv4", 00:16:02.247 "traddr": "10.0.0.2", 00:16:02.247 "trsvcid": "4420" 00:16:02.247 }, 00:16:02.247 "peer_address": { 00:16:02.247 "trtype": "TCP", 00:16:02.247 "adrfam": "IPv4", 00:16:02.247 "traddr": "10.0.0.1", 00:16:02.247 "trsvcid": "39132" 00:16:02.247 }, 00:16:02.247 "auth": { 00:16:02.247 "state": "completed", 00:16:02.247 "digest": 
"sha384", 00:16:02.247 "dhgroup": "ffdhe6144" 00:16:02.247 } 00:16:02.247 } 00:16:02.247 ]' 00:16:02.247 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.505 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.505 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.505 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:02.505 10:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.505 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.505 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.505 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.762 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:02.762 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.329 10:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.587 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.587 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.587 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.587 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.846 00:16:03.846 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.846 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.846 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.104 { 00:16:04.104 "cntlid": 83, 00:16:04.104 "qid": 0, 00:16:04.104 "state": "enabled", 00:16:04.104 "thread": "nvmf_tgt_poll_group_000", 00:16:04.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:04.104 "listen_address": { 00:16:04.104 "trtype": "TCP", 00:16:04.104 "adrfam": "IPv4", 00:16:04.104 "traddr": "10.0.0.2", 00:16:04.104 
"trsvcid": "4420" 00:16:04.104 }, 00:16:04.104 "peer_address": { 00:16:04.104 "trtype": "TCP", 00:16:04.104 "adrfam": "IPv4", 00:16:04.104 "traddr": "10.0.0.1", 00:16:04.104 "trsvcid": "39156" 00:16:04.104 }, 00:16:04.104 "auth": { 00:16:04.104 "state": "completed", 00:16:04.104 "digest": "sha384", 00:16:04.104 "dhgroup": "ffdhe6144" 00:16:04.104 } 00:16:04.104 } 00:16:04.104 ]' 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.104 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.362 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:04.362 10:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:04.928 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.928 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.928 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.928 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.928 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.928 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.928 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:04.928 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:05.186 
10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:05.186 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.186 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.186 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:05.186 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:05.186 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.186 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.186 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.186 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.186 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.186 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.186 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.186 10:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.443 00:16:05.443 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.443 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.443 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.700 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.700 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.700 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.700 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.700 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.700 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.700 { 00:16:05.700 "cntlid": 85, 00:16:05.700 "qid": 0, 00:16:05.700 "state": "enabled", 00:16:05.700 "thread": "nvmf_tgt_poll_group_000", 00:16:05.700 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.700 "listen_address": { 00:16:05.700 "trtype": "TCP", 00:16:05.700 "adrfam": "IPv4", 00:16:05.700 "traddr": "10.0.0.2", 00:16:05.700 "trsvcid": "4420" 00:16:05.700 }, 00:16:05.700 "peer_address": { 00:16:05.700 "trtype": "TCP", 00:16:05.700 "adrfam": "IPv4", 00:16:05.700 "traddr": "10.0.0.1", 00:16:05.700 "trsvcid": "39200" 00:16:05.700 }, 00:16:05.701 "auth": { 00:16:05.701 "state": "completed", 00:16:05.701 "digest": "sha384", 00:16:05.701 "dhgroup": "ffdhe6144" 00:16:05.701 } 00:16:05.701 } 00:16:05.701 ]' 00:16:05.701 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.701 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.701 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.701 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.701 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.701 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.701 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.701 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.959 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:05.959 10:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:06.524 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.524 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.524 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.524 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.524 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.524 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.524 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.524 10:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:06.782 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:06.782 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.782 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.782 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:06.782 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.782 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.782 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:06.782 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.782 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.782 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.782 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:06.782 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.782 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.040 00:16:07.040 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.040 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.040 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.298 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.298 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.298 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.298 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.298 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.298 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.298 { 00:16:07.298 "cntlid": 87, 
00:16:07.298 "qid": 0, 00:16:07.298 "state": "enabled", 00:16:07.298 "thread": "nvmf_tgt_poll_group_000", 00:16:07.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:07.298 "listen_address": { 00:16:07.298 "trtype": "TCP", 00:16:07.298 "adrfam": "IPv4", 00:16:07.298 "traddr": "10.0.0.2", 00:16:07.298 "trsvcid": "4420" 00:16:07.298 }, 00:16:07.298 "peer_address": { 00:16:07.298 "trtype": "TCP", 00:16:07.298 "adrfam": "IPv4", 00:16:07.298 "traddr": "10.0.0.1", 00:16:07.298 "trsvcid": "39214" 00:16:07.298 }, 00:16:07.298 "auth": { 00:16:07.298 "state": "completed", 00:16:07.298 "digest": "sha384", 00:16:07.298 "dhgroup": "ffdhe6144" 00:16:07.298 } 00:16:07.298 } 00:16:07.298 ]' 00:16:07.298 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.298 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.298 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.557 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:07.557 10:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.557 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.557 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.557 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.557 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:07.557 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:08.122 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.122 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.122 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.122 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.122 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.122 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.122 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.122 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.122 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:08.380 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:08.380 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.380 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.380 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:08.380 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.380 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.380 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.380 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.380 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.380 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.380 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.380 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.380 10:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.945 00:16:08.945 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.945 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.945 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.204 { 00:16:09.204 "cntlid": 89, 00:16:09.204 "qid": 0, 00:16:09.204 "state": "enabled", 00:16:09.204 "thread": "nvmf_tgt_poll_group_000", 00:16:09.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:09.204 "listen_address": { 00:16:09.204 "trtype": "TCP", 00:16:09.204 "adrfam": "IPv4", 00:16:09.204 "traddr": "10.0.0.2", 00:16:09.204 "trsvcid": "4420" 00:16:09.204 }, 00:16:09.204 "peer_address": { 00:16:09.204 "trtype": "TCP", 00:16:09.204 "adrfam": "IPv4", 00:16:09.204 "traddr": "10.0.0.1", 00:16:09.204 "trsvcid": "54030" 00:16:09.204 }, 00:16:09.204 "auth": { 00:16:09.204 "state": "completed", 00:16:09.204 "digest": "sha384", 00:16:09.204 "dhgroup": "ffdhe8192" 00:16:09.204 } 00:16:09.204 } 00:16:09.204 ]' 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.204 10:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.462 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:09.462 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:10.029 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.029 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.029 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.029 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.029 10:44:37 
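Before the kernel connect, each cycle also authenticates through SPDK's own initiator by attaching an NVMe-oF controller bdev with the same key names and then detaching it, as the key1/ffdhe8192 cycle just below does. Sketched with the hostrpc helper (key1/ckey1 are key names assumed to be registered with the host app beforehand):

# SPDK-initiator leg: attach a controller bdev with DH-HMAC-CHAP key names, verify
# it came up (it only appears if authentication succeeded), then detach it.
SUBSYS=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBSYS" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

hostrpc bdev_nvme_detach_controller nvme0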
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.029 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.029 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.029 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.287 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:10.287 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.287 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.287 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:10.287 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.287 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.287 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.287 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.287 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.287 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.287 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.287 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.287 10:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.853 00:16:10.853 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.853 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.853 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.853 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.853 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:10.853 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.853 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.853 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.853 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.853 { 00:16:10.853 "cntlid": 91, 00:16:10.853 "qid": 0, 00:16:10.853 "state": "enabled", 00:16:10.853 "thread": "nvmf_tgt_poll_group_000", 00:16:10.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.853 "listen_address": { 00:16:10.853 "trtype": "TCP", 00:16:10.853 "adrfam": "IPv4", 00:16:10.853 "traddr": "10.0.0.2", 00:16:10.853 "trsvcid": "4420" 00:16:10.853 }, 00:16:10.853 "peer_address": { 00:16:10.853 "trtype": "TCP", 00:16:10.853 "adrfam": "IPv4", 00:16:10.853 "traddr": "10.0.0.1", 00:16:10.853 "trsvcid": "54068" 00:16:10.853 }, 00:16:10.853 "auth": { 00:16:10.853 "state": "completed", 00:16:10.853 "digest": "sha384", 00:16:10.853 "dhgroup": "ffdhe8192" 00:16:10.853 } 00:16:10.853 } 00:16:10.853 ]' 00:16:10.853 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.853 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.853 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.112 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:11.112 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.112 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.112 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.112 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.370 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:11.370 10:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:11.936 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.936 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.936 10:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.936 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.936 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.936 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.936 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:11.936 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:11.937 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:11.937 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.937 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.937 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:11.937 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.937 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.937 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.937 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.937 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.937 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.937 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.937 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.937 10:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.501 00:16:12.501 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.501 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.501 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.759 10:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.759 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.759 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.759 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.759 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.759 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.759 { 00:16:12.759 "cntlid": 93, 00:16:12.759 "qid": 0, 00:16:12.759 "state": "enabled", 00:16:12.759 "thread": "nvmf_tgt_poll_group_000", 00:16:12.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:12.759 "listen_address": { 00:16:12.759 "trtype": "TCP", 00:16:12.759 "adrfam": "IPv4", 00:16:12.759 "traddr": "10.0.0.2", 00:16:12.759 "trsvcid": "4420" 00:16:12.759 }, 00:16:12.759 "peer_address": { 00:16:12.759 "trtype": "TCP", 00:16:12.759 "adrfam": "IPv4", 00:16:12.759 "traddr": "10.0.0.1", 00:16:12.759 "trsvcid": "54098" 00:16:12.759 }, 00:16:12.759 "auth": { 00:16:12.759 "state": "completed", 00:16:12.759 "digest": "sha384", 00:16:12.759 "dhgroup": "ffdhe8192" 00:16:12.759 } 00:16:12.759 } 00:16:12.759 ]' 00:16:12.759 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.759 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.759 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.759 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.759 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.759 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.759 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.759 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.017 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:13.017 10:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:13.584 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.584 10:44:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.584 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.584 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.584 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.584 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.584 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.584 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.842 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:13.842 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.842 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.842 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:13.842 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.842 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.842 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:13.842 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.842 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.842 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.842 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.842 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.842 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.408 00:16:14.408 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.408 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.408 10:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.666 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.666 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.666 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.666 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.666 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.666 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.666 { 00:16:14.666 "cntlid": 95, 00:16:14.666 "qid": 0, 00:16:14.666 "state": "enabled", 00:16:14.666 "thread": "nvmf_tgt_poll_group_000", 00:16:14.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:14.666 "listen_address": { 00:16:14.666 "trtype": "TCP", 00:16:14.666 "adrfam": "IPv4", 00:16:14.666 "traddr": "10.0.0.2", 00:16:14.666 "trsvcid": "4420" 00:16:14.666 }, 00:16:14.666 "peer_address": { 00:16:14.666 "trtype": "TCP", 00:16:14.666 "adrfam": "IPv4", 00:16:14.666 "traddr": "10.0.0.1", 00:16:14.666 "trsvcid": "54114" 00:16:14.666 }, 00:16:14.666 "auth": { 00:16:14.666 "state": "completed", 00:16:14.666 "digest": "sha384", 00:16:14.666 "dhgroup": "ffdhe8192" 00:16:14.666 } 00:16:14.667 } 00:16:14.667 ]' 00:16:14.667 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.667 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.667 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.667 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.667 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.667 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.667 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.667 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.925 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:14.925 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:15.492 10:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.492 10:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.492 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.492 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.492 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.492 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:15.492 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.492 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.492 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.492 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.750 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:15.750 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.750 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:15.750 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:15.750 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.750 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.750 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.750 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.750 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.750 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.750 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.750 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.750 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.008 00:16:16.008 
10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.008 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.008 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.266 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.266 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.266 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.266 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.266 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.266 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.266 { 00:16:16.266 "cntlid": 97, 00:16:16.266 "qid": 0, 00:16:16.266 "state": "enabled", 00:16:16.266 "thread": "nvmf_tgt_poll_group_000", 00:16:16.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:16.267 "listen_address": { 00:16:16.267 "trtype": "TCP", 00:16:16.267 "adrfam": "IPv4", 00:16:16.267 "traddr": "10.0.0.2", 00:16:16.267 "trsvcid": "4420" 00:16:16.267 }, 00:16:16.267 "peer_address": { 00:16:16.267 "trtype": "TCP", 00:16:16.267 "adrfam": "IPv4", 00:16:16.267 "traddr": "10.0.0.1", 00:16:16.267 "trsvcid": "54144" 00:16:16.267 }, 00:16:16.267 "auth": { 00:16:16.267 "state": "completed", 00:16:16.267 "digest": "sha512", 00:16:16.267 "dhgroup": "null" 00:16:16.267 } 00:16:16.267 } 00:16:16.267 ]' 00:16:16.267 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.267 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.267 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.267 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:16.267 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.267 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.267 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.267 10:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.525 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:16.525 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:17.091 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.091 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.091 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.091 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.091 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.091 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.091 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:17.091 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:17.350 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:17.350 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.350 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:17.350 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:17.350 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:17.350 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.350 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.350 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.350 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.350 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.350 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.350 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.350 10:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.608 00:16:17.608 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.608 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.608 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.608 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.608 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.608 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.608 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.608 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.608 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.608 { 00:16:17.608 "cntlid": 99, 00:16:17.608 "qid": 0, 00:16:17.608 "state": "enabled", 00:16:17.608 "thread": "nvmf_tgt_poll_group_000", 00:16:17.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.608 "listen_address": { 00:16:17.608 "trtype": "TCP", 00:16:17.608 "adrfam": "IPv4", 00:16:17.608 "traddr": "10.0.0.2", 00:16:17.608 "trsvcid": "4420" 00:16:17.608 }, 00:16:17.608 "peer_address": { 00:16:17.608 "trtype": "TCP", 00:16:17.609 "adrfam": "IPv4", 00:16:17.609 "traddr": "10.0.0.1", 00:16:17.609 "trsvcid": "60430" 00:16:17.609 }, 00:16:17.609 "auth": { 00:16:17.609 "state": "completed", 00:16:17.609 "digest": "sha512", 00:16:17.609 "dhgroup": "null" 00:16:17.609 } 00:16:17.609 } 00:16:17.609 ]' 00:16:17.609 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.866 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.866 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.867 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:17.867 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.867 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.867 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.867 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.124 10:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:18.125 10:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:18.691 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.691 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.691 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.691 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.691 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.691 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.691 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:18.691 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:18.950 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:18.950 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.950 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.950 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:18.950 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:18.950 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.950 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.950 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.950 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.950 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.950 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.950 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
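The chunk above is one pass of the per-key loop that target/auth.sh repeats for every digest/dhgroup/key combination in this log. A condensed shell sketch of that pass follows; it is only a readability aid, not additional test steps. The address, NQNs and key names (key2/ckey2 here) are copied from the surrounding log lines, rpc.py stands for spdk/scripts/rpc.py as invoked above, rpc_cmd is assumed to talk to the nvmf target's own RPC socket (not shown in this excerpt), and the DHHC-1 secrets are abbreviated.

    # host: restrict DH-HMAC-CHAP negotiation to the digest/dhgroup under test
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

    # target: register the host NQN together with the key pair it must authenticate with
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # host: attach a controller over TCP, authenticating with the same keys
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # verify: the controller exists and the qpair reports the negotiated auth parameters
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers        # jq -r '.[].name'  -> nvme0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0  # .auth.{state,digest,dhgroup} checked with jq
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # repeat the handshake with nvme-cli, then remove the host again before the next pass
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562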
00:16:18.950 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.950 00:16:19.208 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.208 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.208 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.209 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.209 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.209 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.209 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.209 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.209 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.209 { 00:16:19.209 "cntlid": 101, 00:16:19.209 "qid": 0, 00:16:19.209 "state": "enabled", 00:16:19.209 "thread": "nvmf_tgt_poll_group_000", 00:16:19.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:19.209 "listen_address": { 00:16:19.209 "trtype": "TCP", 00:16:19.209 "adrfam": "IPv4", 00:16:19.209 "traddr": "10.0.0.2", 00:16:19.209 "trsvcid": "4420" 00:16:19.209 }, 00:16:19.209 "peer_address": { 00:16:19.209 "trtype": "TCP", 00:16:19.209 "adrfam": "IPv4", 00:16:19.209 "traddr": "10.0.0.1", 00:16:19.209 "trsvcid": "60448" 00:16:19.209 }, 00:16:19.209 "auth": { 00:16:19.209 "state": "completed", 00:16:19.209 "digest": "sha512", 00:16:19.209 "dhgroup": "null" 00:16:19.209 } 00:16:19.209 } 00:16:19.209 ]' 00:16:19.209 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.466 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.467 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.467 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:19.467 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.467 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.467 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.467 10:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.725 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:19.725 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.292 10:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.550 00:16:20.550 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.550 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.550 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.808 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.808 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.808 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.808 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.808 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.808 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.808 { 00:16:20.808 "cntlid": 103, 00:16:20.808 "qid": 0, 00:16:20.808 "state": "enabled", 00:16:20.808 "thread": "nvmf_tgt_poll_group_000", 00:16:20.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:20.808 "listen_address": { 00:16:20.808 "trtype": "TCP", 00:16:20.808 "adrfam": "IPv4", 00:16:20.808 "traddr": "10.0.0.2", 00:16:20.808 "trsvcid": "4420" 00:16:20.808 }, 00:16:20.808 "peer_address": { 00:16:20.808 "trtype": "TCP", 00:16:20.808 "adrfam": "IPv4", 00:16:20.808 "traddr": "10.0.0.1", 00:16:20.808 "trsvcid": "60478" 00:16:20.808 }, 00:16:20.808 "auth": { 00:16:20.808 "state": "completed", 00:16:20.808 "digest": "sha512", 00:16:20.808 "dhgroup": "null" 00:16:20.808 } 00:16:20.808 } 00:16:20.808 ]' 00:16:20.808 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.808 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.808 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.066 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:21.066 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.066 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.066 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.066 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.066 10:44:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:21.066 10:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:21.670 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.670 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.670 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.670 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.670 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.670 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.670 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.670 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:21.670 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:21.944 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:21.944 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.944 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:21.944 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:21.944 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:21.944 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.944 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.944 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.944 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.944 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.944 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
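The three jq assertions that recur after every nvmf_subsystem_get_qpairs call in this log all read the first qpair's auth block from JSON shaped like the listings above. A minimal sketch of that check is below, with the field names, the qpairs variable name and the expected values for this pass (sha512, ffdhe2048) taken from the log; the exact comparison form in target/auth.sh may differ.

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # "null" in the null-dhgroup passes
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]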
00:16:21.944 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.944 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.201 00:16:22.201 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.201 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.201 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.460 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.460 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.460 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.460 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.460 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.460 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.460 { 00:16:22.460 "cntlid": 105, 00:16:22.460 "qid": 0, 00:16:22.460 "state": "enabled", 00:16:22.460 "thread": "nvmf_tgt_poll_group_000", 00:16:22.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.460 "listen_address": { 00:16:22.460 "trtype": "TCP", 00:16:22.460 "adrfam": "IPv4", 00:16:22.460 "traddr": "10.0.0.2", 00:16:22.460 "trsvcid": "4420" 00:16:22.460 }, 00:16:22.460 "peer_address": { 00:16:22.460 "trtype": "TCP", 00:16:22.460 "adrfam": "IPv4", 00:16:22.460 "traddr": "10.0.0.1", 00:16:22.460 "trsvcid": "60512" 00:16:22.460 }, 00:16:22.460 "auth": { 00:16:22.460 "state": "completed", 00:16:22.460 "digest": "sha512", 00:16:22.460 "dhgroup": "ffdhe2048" 00:16:22.460 } 00:16:22.460 } 00:16:22.460 ]' 00:16:22.460 10:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.460 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.460 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.460 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.460 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.460 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.460 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.460 10:44:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.718 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:22.718 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:23.285 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.285 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.285 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.285 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.285 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.285 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.285 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:23.285 10:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:23.543 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:23.543 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.543 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.543 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:23.543 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:23.543 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.543 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.543 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.543 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.544 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.544 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.544 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.544 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.802 00:16:23.802 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.802 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.802 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.060 { 00:16:24.060 "cntlid": 107, 00:16:24.060 "qid": 0, 00:16:24.060 "state": "enabled", 00:16:24.060 "thread": "nvmf_tgt_poll_group_000", 00:16:24.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:24.060 "listen_address": { 00:16:24.060 "trtype": "TCP", 00:16:24.060 "adrfam": "IPv4", 00:16:24.060 "traddr": "10.0.0.2", 00:16:24.060 "trsvcid": "4420" 00:16:24.060 }, 00:16:24.060 "peer_address": { 00:16:24.060 "trtype": "TCP", 00:16:24.060 "adrfam": "IPv4", 00:16:24.060 "traddr": "10.0.0.1", 00:16:24.060 "trsvcid": "60550" 00:16:24.060 }, 00:16:24.060 "auth": { 00:16:24.060 "state": "completed", 00:16:24.060 "digest": "sha512", 00:16:24.060 "dhgroup": "ffdhe2048" 00:16:24.060 } 00:16:24.060 } 00:16:24.060 ]' 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.060 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.318 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:24.318 10:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:24.885 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.885 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.885 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.885 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.885 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.885 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.885 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:24.885 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:25.143 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:25.143 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.143 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.143 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:25.143 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.143 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.143 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:25.143 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.144 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.144 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.144 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.144 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.144 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.402 00:16:25.402 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.402 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.402 10:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.660 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.660 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.660 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.660 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.660 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.660 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.660 { 00:16:25.660 "cntlid": 109, 00:16:25.660 "qid": 0, 00:16:25.660 "state": "enabled", 00:16:25.660 "thread": "nvmf_tgt_poll_group_000", 00:16:25.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.660 "listen_address": { 00:16:25.660 "trtype": "TCP", 00:16:25.660 "adrfam": "IPv4", 00:16:25.660 "traddr": "10.0.0.2", 00:16:25.660 "trsvcid": "4420" 00:16:25.660 }, 00:16:25.660 "peer_address": { 00:16:25.660 "trtype": "TCP", 00:16:25.660 "adrfam": "IPv4", 00:16:25.660 "traddr": "10.0.0.1", 00:16:25.660 "trsvcid": "60574" 00:16:25.660 }, 00:16:25.660 "auth": { 00:16:25.660 "state": "completed", 00:16:25.660 "digest": "sha512", 00:16:25.660 "dhgroup": "ffdhe2048" 00:16:25.660 } 00:16:25.660 } 00:16:25.660 ]' 00:16:25.660 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.660 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.660 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.660 10:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.660 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.660 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.660 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.660 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.919 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:25.919 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:26.485 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.485 10:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.485 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.485 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.485 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.485 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.485 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.485 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.744 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:26.744 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.744 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.744 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:26.744 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:26.744 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.744 10:44:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:26.744 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.744 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.744 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.744 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.744 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.744 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.003 00:16:27.003 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.003 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.003 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.261 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.261 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.261 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.261 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.261 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.261 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.261 { 00:16:27.261 "cntlid": 111, 00:16:27.261 "qid": 0, 00:16:27.262 "state": "enabled", 00:16:27.262 "thread": "nvmf_tgt_poll_group_000", 00:16:27.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:27.262 "listen_address": { 00:16:27.262 "trtype": "TCP", 00:16:27.262 "adrfam": "IPv4", 00:16:27.262 "traddr": "10.0.0.2", 00:16:27.262 "trsvcid": "4420" 00:16:27.262 }, 00:16:27.262 "peer_address": { 00:16:27.262 "trtype": "TCP", 00:16:27.262 "adrfam": "IPv4", 00:16:27.262 "traddr": "10.0.0.1", 00:16:27.262 "trsvcid": "60596" 00:16:27.262 }, 00:16:27.262 "auth": { 00:16:27.262 "state": "completed", 00:16:27.262 "digest": "sha512", 00:16:27.262 "dhgroup": "ffdhe2048" 00:16:27.262 } 00:16:27.262 } 00:16:27.262 ]' 00:16:27.262 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.262 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.262 
10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.262 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.262 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.262 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.262 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.262 10:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.520 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:27.520 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:28.087 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.087 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.087 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.087 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.087 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.087 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.087 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.087 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.087 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.346 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:28.346 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.346 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:28.346 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:28.346 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.346 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.346 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.346 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.346 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.346 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.346 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.346 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.346 10:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.604 00:16:28.604 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.604 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.604 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.862 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.862 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.862 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.862 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.862 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.862 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.862 { 00:16:28.862 "cntlid": 113, 00:16:28.862 "qid": 0, 00:16:28.862 "state": "enabled", 00:16:28.862 "thread": "nvmf_tgt_poll_group_000", 00:16:28.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.862 "listen_address": { 00:16:28.862 "trtype": "TCP", 00:16:28.862 "adrfam": "IPv4", 00:16:28.862 "traddr": "10.0.0.2", 00:16:28.862 "trsvcid": "4420" 00:16:28.862 }, 00:16:28.862 "peer_address": { 00:16:28.862 "trtype": "TCP", 00:16:28.862 "adrfam": "IPv4", 00:16:28.862 "traddr": "10.0.0.1", 00:16:28.862 "trsvcid": "60186" 00:16:28.862 }, 00:16:28.862 "auth": { 00:16:28.862 "state": "completed", 00:16:28.862 "digest": "sha512", 00:16:28.862 "dhgroup": "ffdhe3072" 00:16:28.862 } 00:16:28.862 } 00:16:28.862 ]' 00:16:28.862 10:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.862 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.862 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.862 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.862 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.862 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.862 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.862 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.120 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:29.120 10:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.692 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.693 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.693 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.693 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.693 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.950 00:16:30.209 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.209 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.209 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.209 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.209 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.209 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.209 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.209 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.209 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.209 { 00:16:30.209 "cntlid": 115, 00:16:30.209 "qid": 0, 00:16:30.209 "state": "enabled", 00:16:30.209 "thread": "nvmf_tgt_poll_group_000", 00:16:30.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.209 "listen_address": { 00:16:30.209 "trtype": "TCP", 00:16:30.209 "adrfam": "IPv4", 00:16:30.209 "traddr": "10.0.0.2", 00:16:30.209 "trsvcid": "4420" 00:16:30.209 }, 00:16:30.209 "peer_address": { 00:16:30.209 "trtype": "TCP", 00:16:30.209 "adrfam": "IPv4", 
00:16:30.209 "traddr": "10.0.0.1", 00:16:30.209 "trsvcid": "60234" 00:16:30.209 }, 00:16:30.209 "auth": { 00:16:30.209 "state": "completed", 00:16:30.209 "digest": "sha512", 00:16:30.209 "dhgroup": "ffdhe3072" 00:16:30.209 } 00:16:30.209 } 00:16:30.209 ]' 00:16:30.209 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.209 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.467 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.467 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.467 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.467 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.467 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.467 10:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.725 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:30.725 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.291 10:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.549 00:16:31.807 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.807 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.807 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.807 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.807 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.807 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.807 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.807 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.807 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.807 { 00:16:31.807 "cntlid": 117, 00:16:31.807 "qid": 0, 00:16:31.807 "state": "enabled", 00:16:31.807 "thread": "nvmf_tgt_poll_group_000", 00:16:31.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.807 "listen_address": { 00:16:31.807 "trtype": "TCP", 
00:16:31.807 "adrfam": "IPv4", 00:16:31.807 "traddr": "10.0.0.2", 00:16:31.807 "trsvcid": "4420" 00:16:31.807 }, 00:16:31.807 "peer_address": { 00:16:31.807 "trtype": "TCP", 00:16:31.807 "adrfam": "IPv4", 00:16:31.807 "traddr": "10.0.0.1", 00:16:31.807 "trsvcid": "60274" 00:16:31.807 }, 00:16:31.807 "auth": { 00:16:31.807 "state": "completed", 00:16:31.807 "digest": "sha512", 00:16:31.807 "dhgroup": "ffdhe3072" 00:16:31.807 } 00:16:31.807 } 00:16:31.807 ]' 00:16:31.807 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.070 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.070 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.070 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:32.070 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.070 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.070 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.070 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.330 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:32.330 10:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.894 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.152 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.152 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.152 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.152 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.410 00:16:33.410 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.410 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.410 10:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.410 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.410 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.411 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.411 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.411 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.411 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.411 { 00:16:33.411 "cntlid": 119, 00:16:33.411 "qid": 0, 00:16:33.411 "state": "enabled", 00:16:33.411 "thread": "nvmf_tgt_poll_group_000", 00:16:33.411 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:33.411 "listen_address": { 00:16:33.411 "trtype": "TCP", 00:16:33.411 "adrfam": "IPv4", 00:16:33.411 "traddr": "10.0.0.2", 00:16:33.411 "trsvcid": "4420" 00:16:33.411 }, 00:16:33.411 "peer_address": { 00:16:33.411 "trtype": "TCP", 00:16:33.411 "adrfam": "IPv4", 00:16:33.411 "traddr": "10.0.0.1", 00:16:33.411 "trsvcid": "60306" 00:16:33.411 }, 00:16:33.411 "auth": { 00:16:33.411 "state": "completed", 00:16:33.411 "digest": "sha512", 00:16:33.411 "dhgroup": "ffdhe3072" 00:16:33.411 } 00:16:33.411 } 00:16:33.411 ]' 00:16:33.411 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.668 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.669 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.669 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:33.669 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.669 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.669 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.669 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.927 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:33.927 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:34.493 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.493 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.493 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.493 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.493 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.493 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.493 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.493 10:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.493 10:45:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.751 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:34.751 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.751 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.751 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:34.751 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.751 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.751 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.751 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.751 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.751 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.751 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.751 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.751 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.009 00:16:35.009 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.009 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.009 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.267 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.267 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.267 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.267 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.267 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.267 10:45:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.267 { 00:16:35.267 "cntlid": 121, 00:16:35.267 "qid": 0, 00:16:35.267 "state": "enabled", 00:16:35.267 "thread": "nvmf_tgt_poll_group_000", 00:16:35.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.267 "listen_address": { 00:16:35.267 "trtype": "TCP", 00:16:35.267 "adrfam": "IPv4", 00:16:35.267 "traddr": "10.0.0.2", 00:16:35.267 "trsvcid": "4420" 00:16:35.267 }, 00:16:35.267 "peer_address": { 00:16:35.267 "trtype": "TCP", 00:16:35.267 "adrfam": "IPv4", 00:16:35.267 "traddr": "10.0.0.1", 00:16:35.267 "trsvcid": "60328" 00:16:35.267 }, 00:16:35.267 "auth": { 00:16:35.267 "state": "completed", 00:16:35.267 "digest": "sha512", 00:16:35.267 "dhgroup": "ffdhe4096" 00:16:35.267 } 00:16:35.267 } 00:16:35.267 ]' 00:16:35.267 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.267 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.267 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.267 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.267 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.267 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.267 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.267 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.525 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:35.525 10:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:36.091 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.091 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.091 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.091 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.091 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:36.091 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.091 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:36.091 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:36.348 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:36.348 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.348 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.348 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:36.348 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.348 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.348 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.348 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.348 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.348 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.348 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.348 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.348 10:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.606 00:16:36.606 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.606 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.606 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.864 { 00:16:36.864 "cntlid": 123, 00:16:36.864 "qid": 0, 00:16:36.864 "state": "enabled", 00:16:36.864 "thread": "nvmf_tgt_poll_group_000", 00:16:36.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.864 "listen_address": { 00:16:36.864 "trtype": "TCP", 00:16:36.864 "adrfam": "IPv4", 00:16:36.864 "traddr": "10.0.0.2", 00:16:36.864 "trsvcid": "4420" 00:16:36.864 }, 00:16:36.864 "peer_address": { 00:16:36.864 "trtype": "TCP", 00:16:36.864 "adrfam": "IPv4", 00:16:36.864 "traddr": "10.0.0.1", 00:16:36.864 "trsvcid": "60344" 00:16:36.864 }, 00:16:36.864 "auth": { 00:16:36.864 "state": "completed", 00:16:36.864 "digest": "sha512", 00:16:36.864 "dhgroup": "ffdhe4096" 00:16:36.864 } 00:16:36.864 } 00:16:36.864 ]' 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.864 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.123 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:37.123 10:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:37.688 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.688 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.688 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.688 10:45:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.688 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.688 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.688 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:37.688 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:37.946 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:37.946 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.946 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.946 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:37.946 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.946 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.946 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.946 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.946 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.946 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.946 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.946 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.946 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.204 00:16:38.204 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.204 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.204 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.462 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.462 10:45:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.462 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.462 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.462 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.462 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.462 { 00:16:38.462 "cntlid": 125, 00:16:38.462 "qid": 0, 00:16:38.462 "state": "enabled", 00:16:38.462 "thread": "nvmf_tgt_poll_group_000", 00:16:38.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.462 "listen_address": { 00:16:38.462 "trtype": "TCP", 00:16:38.462 "adrfam": "IPv4", 00:16:38.462 "traddr": "10.0.0.2", 00:16:38.462 "trsvcid": "4420" 00:16:38.462 }, 00:16:38.462 "peer_address": { 00:16:38.462 "trtype": "TCP", 00:16:38.462 "adrfam": "IPv4", 00:16:38.463 "traddr": "10.0.0.1", 00:16:38.463 "trsvcid": "46976" 00:16:38.463 }, 00:16:38.463 "auth": { 00:16:38.463 "state": "completed", 00:16:38.463 "digest": "sha512", 00:16:38.463 "dhgroup": "ffdhe4096" 00:16:38.463 } 00:16:38.463 } 00:16:38.463 ]' 00:16:38.463 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.463 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.463 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.463 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.463 10:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.463 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.463 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.463 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.720 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:38.720 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:39.286 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.286 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.286 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.286 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.286 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.286 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.286 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:39.286 10:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:39.544 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:39.544 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.544 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.544 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:39.544 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.544 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.544 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:39.544 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.544 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.544 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.544 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.544 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.544 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.802 00:16:39.802 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.802 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.802 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.060 10:45:07 
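Each attach in this log is followed by the same verification: the host lists its controllers and expects nvme0, and the target's qpair list is queried so jq can pull out the negotiated auth parameters. A self-contained sketch of that check (paths and NQNs are the ones used throughout this log; the target rpc.py is assumed to use its default socket):

# Sketch of the verification step traced above for the sha512/ffdhe4096 case.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host: the authenticated controller should show up as nvme0.
name=$($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# Target: the accepted qpair should report a completed sha512/ffdhe4096 authentication.
qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]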
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.060 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.060 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.060 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.060 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.060 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.060 { 00:16:40.060 "cntlid": 127, 00:16:40.060 "qid": 0, 00:16:40.060 "state": "enabled", 00:16:40.060 "thread": "nvmf_tgt_poll_group_000", 00:16:40.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.060 "listen_address": { 00:16:40.060 "trtype": "TCP", 00:16:40.060 "adrfam": "IPv4", 00:16:40.060 "traddr": "10.0.0.2", 00:16:40.060 "trsvcid": "4420" 00:16:40.060 }, 00:16:40.060 "peer_address": { 00:16:40.060 "trtype": "TCP", 00:16:40.060 "adrfam": "IPv4", 00:16:40.060 "traddr": "10.0.0.1", 00:16:40.060 "trsvcid": "47016" 00:16:40.060 }, 00:16:40.060 "auth": { 00:16:40.060 "state": "completed", 00:16:40.060 "digest": "sha512", 00:16:40.060 "dhgroup": "ffdhe4096" 00:16:40.060 } 00:16:40.060 } 00:16:40.060 ]' 00:16:40.060 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.060 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.060 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.060 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:40.060 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.060 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.060 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.060 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.318 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:40.319 10:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:40.884 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.884 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.884 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.884 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.884 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.884 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.884 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.884 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:40.884 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.142 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:41.142 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.143 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.143 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:41.143 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.143 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.143 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.143 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.143 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.143 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.143 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.143 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.143 10:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.401 00:16:41.401 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.401 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.401 
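The target/auth.sh@119 and @120 markers in the trace show the loop that drives this whole stretch of the log: each DH group is re-armed on the host, then every configured key is exercised through connect_authenticate. A reconstruction of that loop (the hostrpc and connect_authenticate helpers and the dhgroups/keys arrays are defined earlier in target/auth.sh and are not visible in this excerpt; the sha512 literal matches this portion of the run):

# Reconstruction of the loop visible at target/auth.sh lines 119-123 in the trace.
for dhgroup in "${dhgroups[@]}"; do        # ffdhe4096, ffdhe6144, ffdhe8192 in this excerpt
  for keyid in "${!keys[@]}"; do           # key0 .. key3
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
    connect_authenticate sha512 "$dhgroup" "$keyid"
  done
done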
10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.659 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.659 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.659 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.659 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.659 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.659 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.659 { 00:16:41.659 "cntlid": 129, 00:16:41.659 "qid": 0, 00:16:41.659 "state": "enabled", 00:16:41.659 "thread": "nvmf_tgt_poll_group_000", 00:16:41.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:41.659 "listen_address": { 00:16:41.659 "trtype": "TCP", 00:16:41.659 "adrfam": "IPv4", 00:16:41.659 "traddr": "10.0.0.2", 00:16:41.659 "trsvcid": "4420" 00:16:41.659 }, 00:16:41.659 "peer_address": { 00:16:41.659 "trtype": "TCP", 00:16:41.659 "adrfam": "IPv4", 00:16:41.659 "traddr": "10.0.0.1", 00:16:41.659 "trsvcid": "47056" 00:16:41.659 }, 00:16:41.659 "auth": { 00:16:41.659 "state": "completed", 00:16:41.659 "digest": "sha512", 00:16:41.659 "dhgroup": "ffdhe6144" 00:16:41.659 } 00:16:41.659 } 00:16:41.659 ]' 00:16:41.659 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.659 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.659 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.659 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.659 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.659 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.917 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.917 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.917 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:41.917 10:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:42.485 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.485 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.485 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.485 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.485 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.485 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.485 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:42.485 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:42.744 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:42.744 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.744 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:42.744 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:42.744 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:42.744 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.744 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.744 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.744 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.744 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.744 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.744 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.744 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.309 00:16:43.309 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.309 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.309 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.310 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.310 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.310 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.310 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.310 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.310 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.310 { 00:16:43.310 "cntlid": 131, 00:16:43.310 "qid": 0, 00:16:43.310 "state": "enabled", 00:16:43.310 "thread": "nvmf_tgt_poll_group_000", 00:16:43.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.310 "listen_address": { 00:16:43.310 "trtype": "TCP", 00:16:43.310 "adrfam": "IPv4", 00:16:43.310 "traddr": "10.0.0.2", 00:16:43.310 "trsvcid": "4420" 00:16:43.310 }, 00:16:43.310 "peer_address": { 00:16:43.310 "trtype": "TCP", 00:16:43.310 "adrfam": "IPv4", 00:16:43.310 "traddr": "10.0.0.1", 00:16:43.310 "trsvcid": "47080" 00:16:43.310 }, 00:16:43.310 "auth": { 00:16:43.310 "state": "completed", 00:16:43.310 "digest": "sha512", 00:16:43.310 "dhgroup": "ffdhe6144" 00:16:43.310 } 00:16:43.310 } 00:16:43.310 ]' 00:16:43.310 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.310 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.310 10:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.568 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.568 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.568 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.568 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.568 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.568 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:43.568 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:44.502 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.502 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.502 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.502 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.502 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.502 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.502 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.502 10:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.502 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:44.502 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.502 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.502 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:44.502 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.502 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.502 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.502 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.502 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.502 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.502 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.502 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.502 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.760 00:16:44.760 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.760 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.760 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.018 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.018 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.018 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.018 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.018 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.018 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.018 { 00:16:45.018 "cntlid": 133, 00:16:45.018 "qid": 0, 00:16:45.018 "state": "enabled", 00:16:45.018 "thread": "nvmf_tgt_poll_group_000", 00:16:45.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.018 "listen_address": { 00:16:45.018 "trtype": "TCP", 00:16:45.018 "adrfam": "IPv4", 00:16:45.018 "traddr": "10.0.0.2", 00:16:45.018 "trsvcid": "4420" 00:16:45.018 }, 00:16:45.018 "peer_address": { 00:16:45.018 "trtype": "TCP", 00:16:45.018 "adrfam": "IPv4", 00:16:45.018 "traddr": "10.0.0.1", 00:16:45.018 "trsvcid": "47098" 00:16:45.018 }, 00:16:45.018 "auth": { 00:16:45.018 "state": "completed", 00:16:45.018 "digest": "sha512", 00:16:45.018 "dhgroup": "ffdhe6144" 00:16:45.018 } 00:16:45.018 } 00:16:45.018 ]' 00:16:45.018 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.018 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.018 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.276 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.276 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.276 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.276 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.276 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.534 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret 
DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:45.534 10:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:46.100 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.100 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.100 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.100 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.100 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.100 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:46.101 10:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.667 00:16:46.667 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.667 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.667 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.667 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.667 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.667 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.667 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.667 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.667 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.667 { 00:16:46.667 "cntlid": 135, 00:16:46.667 "qid": 0, 00:16:46.667 "state": "enabled", 00:16:46.667 "thread": "nvmf_tgt_poll_group_000", 00:16:46.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:46.667 "listen_address": { 00:16:46.667 "trtype": "TCP", 00:16:46.667 "adrfam": "IPv4", 00:16:46.667 "traddr": "10.0.0.2", 00:16:46.667 "trsvcid": "4420" 00:16:46.667 }, 00:16:46.667 "peer_address": { 00:16:46.667 "trtype": "TCP", 00:16:46.667 "adrfam": "IPv4", 00:16:46.667 "traddr": "10.0.0.1", 00:16:46.667 "trsvcid": "47124" 00:16:46.667 }, 00:16:46.667 "auth": { 00:16:46.667 "state": "completed", 00:16:46.667 "digest": "sha512", 00:16:46.667 "dhgroup": "ffdhe6144" 00:16:46.667 } 00:16:46.667 } 00:16:46.667 ]' 00:16:46.667 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.925 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.925 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.925 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.925 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.925 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.925 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.925 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.183 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:47.183 10:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:47.748 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.748 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.748 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.748 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.748 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.748 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.748 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.748 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:47.749 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:47.749 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:47.749 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.749 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.749 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:47.749 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.749 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.749 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.749 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.749 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.007 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.007 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.007 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.007 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.264 00:16:48.264 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.264 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.264 10:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.522 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.522 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.522 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.522 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.522 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.522 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.522 { 00:16:48.522 "cntlid": 137, 00:16:48.522 "qid": 0, 00:16:48.522 "state": "enabled", 00:16:48.522 "thread": "nvmf_tgt_poll_group_000", 00:16:48.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.522 "listen_address": { 00:16:48.522 "trtype": "TCP", 00:16:48.522 "adrfam": "IPv4", 00:16:48.522 "traddr": "10.0.0.2", 00:16:48.522 "trsvcid": "4420" 00:16:48.522 }, 00:16:48.522 "peer_address": { 00:16:48.522 "trtype": "TCP", 00:16:48.522 "adrfam": "IPv4", 00:16:48.522 "traddr": "10.0.0.1", 00:16:48.522 "trsvcid": "48916" 00:16:48.522 }, 00:16:48.522 "auth": { 00:16:48.522 "state": "completed", 00:16:48.522 "digest": "sha512", 00:16:48.522 "dhgroup": "ffdhe8192" 00:16:48.522 } 00:16:48.522 } 00:16:48.522 ]' 00:16:48.522 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.522 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.522 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.780 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.780 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.780 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.780 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.780 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.038 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:49.038 10:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.603 10:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.603 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.169 00:16:50.169 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.169 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.169 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.427 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.427 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.427 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.427 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.427 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.427 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.427 { 00:16:50.427 "cntlid": 139, 00:16:50.427 "qid": 0, 00:16:50.427 "state": "enabled", 00:16:50.427 "thread": "nvmf_tgt_poll_group_000", 00:16:50.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:50.427 "listen_address": { 00:16:50.427 "trtype": "TCP", 00:16:50.427 "adrfam": "IPv4", 00:16:50.427 "traddr": "10.0.0.2", 00:16:50.427 "trsvcid": "4420" 00:16:50.427 }, 00:16:50.427 "peer_address": { 00:16:50.427 "trtype": "TCP", 00:16:50.427 "adrfam": "IPv4", 00:16:50.427 "traddr": "10.0.0.1", 00:16:50.427 "trsvcid": "48940" 00:16:50.427 }, 00:16:50.427 "auth": { 00:16:50.427 "state": "completed", 00:16:50.427 "digest": "sha512", 00:16:50.427 "dhgroup": "ffdhe8192" 00:16:50.427 } 00:16:50.427 } 00:16:50.427 ]' 00:16:50.427 10:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.427 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.427 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.427 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.427 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.427 10:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.427 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.427 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.685 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:50.685 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: --dhchap-ctrl-secret DHHC-1:02:NmU1MTk4M2IxYmQ0MGZhYzZhMzRiZTdhYWU0MzA0ZmQyZWIzMWI0YTEyOGYyOTc2y/vJoQ==: 00:16:51.250 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.250 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.250 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.250 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.250 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.250 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.250 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.250 10:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.509 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:51.509 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.509 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.509 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:51.509 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:51.509 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.509 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.509 10:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.509 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.509 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.509 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.509 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.509 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.074 00:16:52.074 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.074 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.074 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.333 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.333 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.333 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.333 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.333 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.333 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.333 { 00:16:52.333 "cntlid": 141, 00:16:52.333 "qid": 0, 00:16:52.333 "state": "enabled", 00:16:52.333 "thread": "nvmf_tgt_poll_group_000", 00:16:52.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.333 "listen_address": { 00:16:52.333 "trtype": "TCP", 00:16:52.333 "adrfam": "IPv4", 00:16:52.333 "traddr": "10.0.0.2", 00:16:52.333 "trsvcid": "4420" 00:16:52.333 }, 00:16:52.333 "peer_address": { 00:16:52.333 "trtype": "TCP", 00:16:52.333 "adrfam": "IPv4", 00:16:52.333 "traddr": "10.0.0.1", 00:16:52.333 "trsvcid": "48964" 00:16:52.333 }, 00:16:52.333 "auth": { 00:16:52.333 "state": "completed", 00:16:52.333 "digest": "sha512", 00:16:52.333 "dhgroup": "ffdhe8192" 00:16:52.333 } 00:16:52.333 } 00:16:52.333 ]' 00:16:52.333 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.333 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.333 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.333 10:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.333 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.333 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.333 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.333 10:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.591 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:52.591 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:01:ZWY4ZDgwYjJlOTg3ZDM2Yjc4OTQxZWY3NGMyZmM2ZDGbPVui: 00:16:53.156 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.156 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.156 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.156 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.156 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.156 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.156 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:53.156 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:53.414 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:53.414 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.414 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.414 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:53.414 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.414 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.414 10:45:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:53.414 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.414 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.414 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.414 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.414 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.414 10:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.980 00:16:53.980 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.980 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.980 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.239 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.239 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.239 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.239 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.239 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.239 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.239 { 00:16:54.239 "cntlid": 143, 00:16:54.239 "qid": 0, 00:16:54.239 "state": "enabled", 00:16:54.239 "thread": "nvmf_tgt_poll_group_000", 00:16:54.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:54.239 "listen_address": { 00:16:54.239 "trtype": "TCP", 00:16:54.239 "adrfam": "IPv4", 00:16:54.239 "traddr": "10.0.0.2", 00:16:54.239 "trsvcid": "4420" 00:16:54.239 }, 00:16:54.239 "peer_address": { 00:16:54.239 "trtype": "TCP", 00:16:54.239 "adrfam": "IPv4", 00:16:54.239 "traddr": "10.0.0.1", 00:16:54.239 "trsvcid": "49002" 00:16:54.239 }, 00:16:54.239 "auth": { 00:16:54.239 "state": "completed", 00:16:54.239 "digest": "sha512", 00:16:54.239 "dhgroup": "ffdhe8192" 00:16:54.239 } 00:16:54.239 } 00:16:54.239 ]' 00:16:54.239 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.239 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.239 
10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.239 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.239 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.239 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.239 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.239 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.497 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:54.497 10:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:16:55.063 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.063 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.063 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.063 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.063 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.063 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:55.063 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:55.063 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:55.063 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.063 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.063 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:55.322 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:55.322 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.322 10:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.322 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.322 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.322 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.322 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.322 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.322 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.322 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.322 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.322 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.322 10:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.888 00:16:55.888 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.888 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.888 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.888 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.888 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.888 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.888 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.888 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.888 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.888 { 00:16:55.888 "cntlid": 145, 00:16:55.888 "qid": 0, 00:16:55.888 "state": "enabled", 00:16:55.888 "thread": "nvmf_tgt_poll_group_000", 00:16:55.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.888 "listen_address": { 00:16:55.888 "trtype": "TCP", 00:16:55.888 "adrfam": "IPv4", 00:16:55.888 "traddr": "10.0.0.2", 00:16:55.888 "trsvcid": "4420" 00:16:55.888 }, 00:16:55.888 "peer_address": { 00:16:55.888 
"trtype": "TCP", 00:16:55.888 "adrfam": "IPv4", 00:16:55.888 "traddr": "10.0.0.1", 00:16:55.888 "trsvcid": "49024" 00:16:55.888 }, 00:16:55.888 "auth": { 00:16:55.888 "state": "completed", 00:16:55.888 "digest": "sha512", 00:16:55.888 "dhgroup": "ffdhe8192" 00:16:55.888 } 00:16:55.888 } 00:16:55.888 ]' 00:16:55.888 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.888 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.888 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.146 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.146 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.146 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.146 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.146 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.405 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:56.405 10:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NzFlYzQ2ZmZlZGUxYzJjZjk3NmUzYzY5NDI2MTJhODgyZGQyNWRlMDMyNWEwZThjVMKGTQ==: --dhchap-ctrl-secret DHHC-1:03:ZGY0MTE1YTc2ZmY4YmZiMjkzMjczMjEwMjk5NjMyN2QzYmUxNzEwZTE0YTdmMTlhNmIyZDkzMjI2OGNhYTBmML/Z8s8=: 00:16:56.971 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:56.972 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:57.230 request: 00:16:57.230 { 00:16:57.230 "name": "nvme0", 00:16:57.230 "trtype": "tcp", 00:16:57.230 "traddr": "10.0.0.2", 00:16:57.230 "adrfam": "ipv4", 00:16:57.230 "trsvcid": "4420", 00:16:57.230 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:57.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:57.230 "prchk_reftag": false, 00:16:57.230 "prchk_guard": false, 00:16:57.230 "hdgst": false, 00:16:57.230 "ddgst": false, 00:16:57.230 "dhchap_key": "key2", 00:16:57.230 "allow_unrecognized_csi": false, 00:16:57.230 "method": "bdev_nvme_attach_controller", 00:16:57.230 "req_id": 1 00:16:57.230 } 00:16:57.230 Got JSON-RPC error response 00:16:57.230 response: 00:16:57.230 { 00:16:57.230 "code": -5, 00:16:57.230 "message": "Input/output error" 00:16:57.230 } 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.230 10:45:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:57.230 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:57.488 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.488 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:57.488 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.488 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:57.488 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:57.488 10:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:57.746 request: 00:16:57.746 { 00:16:57.746 "name": "nvme0", 00:16:57.746 "trtype": "tcp", 00:16:57.746 "traddr": "10.0.0.2", 00:16:57.746 "adrfam": "ipv4", 00:16:57.746 "trsvcid": "4420", 00:16:57.746 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:57.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:57.746 "prchk_reftag": false, 00:16:57.746 "prchk_guard": false, 00:16:57.746 "hdgst": false, 00:16:57.746 "ddgst": false, 00:16:57.746 "dhchap_key": "key1", 00:16:57.747 "dhchap_ctrlr_key": "ckey2", 00:16:57.747 "allow_unrecognized_csi": false, 00:16:57.747 "method": "bdev_nvme_attach_controller", 00:16:57.747 "req_id": 1 00:16:57.747 } 00:16:57.747 Got JSON-RPC error response 00:16:57.747 response: 00:16:57.747 { 00:16:57.747 "code": -5, 00:16:57.747 "message": "Input/output error" 00:16:57.747 } 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:57.747 10:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.747 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.314 request: 00:16:58.314 { 00:16:58.314 "name": "nvme0", 00:16:58.314 "trtype": "tcp", 00:16:58.314 "traddr": "10.0.0.2", 00:16:58.314 "adrfam": "ipv4", 00:16:58.314 "trsvcid": "4420", 00:16:58.314 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:58.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.314 "prchk_reftag": false, 00:16:58.314 "prchk_guard": false, 00:16:58.314 "hdgst": false, 00:16:58.314 "ddgst": false, 00:16:58.314 "dhchap_key": "key1", 00:16:58.314 "dhchap_ctrlr_key": "ckey1", 00:16:58.314 "allow_unrecognized_csi": false, 00:16:58.314 "method": "bdev_nvme_attach_controller", 00:16:58.314 "req_id": 1 00:16:58.314 } 00:16:58.314 Got JSON-RPC error response 00:16:58.314 response: 00:16:58.314 { 00:16:58.314 "code": -5, 00:16:58.314 "message": "Input/output error" 00:16:58.314 } 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2664117 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2664117 ']' 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2664117 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2664117 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2664117' 00:16:58.314 killing process with pid 2664117 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2664117 00:16:58.314 10:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2664117 00:16:58.573 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:58.573 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:58.573 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:58.573 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.573 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2686234 00:16:58.573 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:58.573 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2686234 00:16:58.573 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2686234 ']' 00:16:58.573 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.573 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:58.573 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.573 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:58.573 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2686234 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2686234 ']' 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
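The restart above follows SPDK's usual bring-up pattern: launch nvmf_tgt with --wait-for-rpc so it pauses before subsystem initialization, poll the default RPC socket until it answers, then finish init and load the DH-HMAC-CHAP keys over JSON-RPC before any subsystem configuration. A minimal sketch of that pattern outside the test harness is shown below; the SPDK checkout path, key name and key file are illustrative assumptions, not values from this run.

# Sketch only: start an nvmf target paused for pre-init RPC configuration.
# Assumes an SPDK build under ./spdk; key0 and /tmp/example-dhchap.key are placeholders.
./spdk/build/bin/nvmf_tgt -i 0 --wait-for-rpc -L nvmf_auth &

# Wait until the target answers on its default RPC socket (/var/tmp/spdk.sock).
until ./spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

# Complete subsystem initialization, then register a DH-HMAC-CHAP key in the keyring.
./spdk/scripts/rpc.py framework_start_init
./spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/example-dhchap.key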
00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.831 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.090 null0 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YdF 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.lq4 ]] 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lq4 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.SPK 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.cXc ]] 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cXc 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:59.090 10:45:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5O4 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.090 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.sx6 ]] 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sx6 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Q7E 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
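The steps above are one unidirectional DH-HMAC-CHAP round trip: the key file is registered in the keyring, the host is added to the subsystem with --dhchap-key, the host-side bdev layer is pinned to one digest/dhgroup pair, and the attach performs the actual authentication (the traced rpc.py call right after this note is the real invocation for key3). A condensed sketch of that sequence, using placeholder NQNs, key name and key path rather than the values generated by this run:

# Condensed sketch of one DH-HMAC-CHAP attach; NQNs, key name and key file are placeholders.
RPC=./spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000   # placeholder host NQN

# Target side (default socket): register the key and allow the host with it.
$RPC keyring_file_add_key key3 /tmp/example-dhchap.key
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

# Host side (the SPDK app behind /var/tmp/host.sock): the key must be in this
# app's keyring too; pin the digest/dhgroup, then attach. Authentication runs
# as part of the controller attach.
$RPC -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/example-dhchap.key
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3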
00:16:59.091 10:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.025 nvme0n1 00:17:00.025 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.025 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.025 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.025 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.025 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.025 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.025 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.025 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.025 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.025 { 00:17:00.025 "cntlid": 1, 00:17:00.025 "qid": 0, 00:17:00.025 "state": "enabled", 00:17:00.025 "thread": "nvmf_tgt_poll_group_000", 00:17:00.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.025 "listen_address": { 00:17:00.025 "trtype": "TCP", 00:17:00.025 "adrfam": "IPv4", 00:17:00.025 "traddr": "10.0.0.2", 00:17:00.025 "trsvcid": "4420" 00:17:00.025 }, 00:17:00.025 "peer_address": { 00:17:00.025 "trtype": "TCP", 00:17:00.025 "adrfam": "IPv4", 00:17:00.025 "traddr": "10.0.0.1", 00:17:00.025 "trsvcid": "51676" 00:17:00.025 }, 00:17:00.025 "auth": { 00:17:00.025 "state": "completed", 00:17:00.025 "digest": "sha512", 00:17:00.025 "dhgroup": "ffdhe8192" 00:17:00.025 } 00:17:00.026 } 00:17:00.026 ]' 00:17:00.026 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.026 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.026 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.283 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.283 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.283 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.284 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.284 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.541 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:17:00.541 10:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:17:01.107 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.108 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.366 request: 00:17:01.366 { 00:17:01.366 "name": "nvme0", 00:17:01.366 "trtype": "tcp", 00:17:01.366 "traddr": "10.0.0.2", 00:17:01.366 "adrfam": "ipv4", 00:17:01.366 "trsvcid": "4420", 00:17:01.366 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:01.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:01.366 "prchk_reftag": false, 00:17:01.366 "prchk_guard": false, 00:17:01.366 "hdgst": false, 00:17:01.366 "ddgst": false, 00:17:01.366 "dhchap_key": "key3", 00:17:01.366 "allow_unrecognized_csi": false, 00:17:01.366 "method": "bdev_nvme_attach_controller", 00:17:01.366 "req_id": 1 00:17:01.366 } 00:17:01.366 Got JSON-RPC error response 00:17:01.366 response: 00:17:01.366 { 00:17:01.366 "code": -5, 00:17:01.366 "message": "Input/output error" 00:17:01.366 } 00:17:01.366 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:01.366 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:01.366 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:01.366 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:01.366 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:01.366 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:01.366 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:01.366 10:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:01.624 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:01.624 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:01.624 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:01.624 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:01.625 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.625 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:01.625 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.625 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.625 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.625 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.884 request: 00:17:01.884 { 00:17:01.884 "name": "nvme0", 00:17:01.884 "trtype": "tcp", 00:17:01.884 "traddr": "10.0.0.2", 00:17:01.884 "adrfam": "ipv4", 00:17:01.884 "trsvcid": "4420", 00:17:01.884 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:01.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:01.884 "prchk_reftag": false, 00:17:01.884 "prchk_guard": false, 00:17:01.884 "hdgst": false, 00:17:01.884 "ddgst": false, 00:17:01.884 "dhchap_key": "key3", 00:17:01.884 "allow_unrecognized_csi": false, 00:17:01.884 "method": "bdev_nvme_attach_controller", 00:17:01.884 "req_id": 1 00:17:01.884 } 00:17:01.884 Got JSON-RPC error response 00:17:01.884 response: 00:17:01.884 { 00:17:01.884 "code": -5, 00:17:01.884 "message": "Input/output error" 00:17:01.884 } 00:17:01.884 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:01.884 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:01.884 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:01.884 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:01.884 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:01.884 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:01.884 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:01.884 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:01.884 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:01.884 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:01.884 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.884 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.143 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:02.457 request: 00:17:02.457 { 00:17:02.457 "name": "nvme0", 00:17:02.457 "trtype": "tcp", 00:17:02.457 "traddr": "10.0.0.2", 00:17:02.457 "adrfam": "ipv4", 00:17:02.457 "trsvcid": "4420", 00:17:02.457 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:02.457 "prchk_reftag": false, 00:17:02.457 "prchk_guard": false, 00:17:02.457 "hdgst": false, 00:17:02.457 "ddgst": false, 00:17:02.457 "dhchap_key": "key0", 00:17:02.457 "dhchap_ctrlr_key": "key1", 00:17:02.457 "allow_unrecognized_csi": false, 00:17:02.457 "method": "bdev_nvme_attach_controller", 00:17:02.457 "req_id": 1 00:17:02.457 } 00:17:02.457 Got JSON-RPC error response 00:17:02.457 response: 00:17:02.457 { 00:17:02.457 "code": -5, 00:17:02.457 "message": "Input/output error" 00:17:02.457 } 00:17:02.457 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:02.457 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:02.457 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:02.457 10:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:02.457 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:02.457 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:02.457 10:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:02.736 nvme0n1 00:17:02.736 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:02.736 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:02.736 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.015 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.015 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.015 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.016 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:03.016 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.016 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.016 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.016 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:03.016 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:03.016 10:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:03.996 nvme0n1 00:17:03.996 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:03.996 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:03.996 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.996 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.997 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:03.997 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.997 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.997 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.997 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:03.997 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:03.997 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.254 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.254 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:17:04.254 10:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: --dhchap-ctrl-secret DHHC-1:03:Zjk5MTc4ZGFmMjg2ZDk0MTkzMzY5ZmI5YmM2NmUxYWQ2YTViNzg4YmIzYzFkYzViZmNkZDMyZGE3MjFkOWMzNNWCWb8=: 00:17:04.820 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:04.820 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:04.820 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:04.820 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:04.820 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:04.820 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:04.820 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:04.820 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.820 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.078 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:17:05.078 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:05.078 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:05.078 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:05.078 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.078 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:05.078 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.078 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:05.078 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:05.078 10:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:05.336 request: 00:17:05.336 { 00:17:05.336 "name": "nvme0", 00:17:05.336 "trtype": "tcp", 00:17:05.336 "traddr": "10.0.0.2", 00:17:05.336 "adrfam": "ipv4", 00:17:05.336 "trsvcid": "4420", 00:17:05.336 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:05.336 "prchk_reftag": false, 00:17:05.336 "prchk_guard": false, 00:17:05.336 "hdgst": false, 00:17:05.336 "ddgst": false, 00:17:05.336 "dhchap_key": "key1", 00:17:05.336 "allow_unrecognized_csi": false, 00:17:05.336 "method": "bdev_nvme_attach_controller", 00:17:05.336 "req_id": 1 00:17:05.336 } 00:17:05.336 Got JSON-RPC error response 00:17:05.336 response: 00:17:05.336 { 00:17:05.336 "code": -5, 00:17:05.336 "message": "Input/output error" 00:17:05.337 } 00:17:05.594 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:05.594 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:05.594 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:05.594 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:05.594 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:05.594 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:05.594 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:06.161 nvme0n1 00:17:06.161 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:06.161 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:06.161 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.419 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.419 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.419 10:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.677 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.677 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.677 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.677 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.677 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:06.677 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:06.677 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:06.935 nvme0n1 00:17:06.935 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:06.935 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:06.935 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.191 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.191 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.191 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.191 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:07.191 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.191 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.191 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.191 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: '' 2s 00:17:07.191 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:07.448 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:07.449 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: 00:17:07.449 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:07.449 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:07.449 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:07.449 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: ]] 00:17:07.449 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:M2E5MGY1MzRjYzY2NWExYmVjZjUwMTFkNTRiMGU5MjHU5Ih2: 00:17:07.449 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:07.449 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:07.449 10:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: 2s 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: ]] 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OTQ5YWUzMTE0MTFhYjI3MWFjNjBlYjRkODViM2ExYjAwOWEwYjI1MjE2YWNmMDUxoDT0DQ==: 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:09.347 10:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:11.246 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:11.246 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:11.246 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:11.246 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:11.503 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:11.503 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:11.503 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:11.503 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.503 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:11.503 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.503 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.504 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.504 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:11.504 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:11.504 10:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:12.069 nvme0n1 00:17:12.069 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:12.069 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.069 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.069 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.326 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:12.326 10:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:12.584 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:12.584 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:12.584 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.842 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.842 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.842 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.842 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.842 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.842 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:12.842 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:13.099 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:13.099 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:13.099 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:13.357 10:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:13.615 request: 00:17:13.615 { 00:17:13.615 "name": "nvme0", 00:17:13.615 "dhchap_key": "key1", 00:17:13.615 "dhchap_ctrlr_key": "key3", 00:17:13.615 "method": "bdev_nvme_set_keys", 00:17:13.615 "req_id": 1 00:17:13.615 } 00:17:13.615 Got JSON-RPC error response 00:17:13.615 response: 00:17:13.615 { 00:17:13.615 "code": -13, 00:17:13.615 "message": "Permission denied" 00:17:13.615 } 00:17:13.872 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:13.872 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:13.872 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:13.872 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:13.872 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:13.872 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:13.872 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.872 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:13.872 10:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:15.245 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:15.245 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:15.245 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.245 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:15.245 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:15.245 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.245 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.245 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.245 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:15.245 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:15.245 10:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:15.810 nvme0n1 00:17:15.810 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:15.810 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.810 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.810 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.810 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:15.810 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:15.810 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:15.810 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
00:17:15.810 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.810 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:15.810 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.810 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:15.810 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:16.374 request: 00:17:16.374 { 00:17:16.374 "name": "nvme0", 00:17:16.374 "dhchap_key": "key2", 00:17:16.374 "dhchap_ctrlr_key": "key0", 00:17:16.374 "method": "bdev_nvme_set_keys", 00:17:16.374 "req_id": 1 00:17:16.374 } 00:17:16.374 Got JSON-RPC error response 00:17:16.374 response: 00:17:16.374 { 00:17:16.374 "code": -13, 00:17:16.374 "message": "Permission denied" 00:17:16.374 } 00:17:16.374 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:16.374 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.374 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.374 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.374 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:16.374 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:16.374 10:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.631 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:16.631 10:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:17.563 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:17.563 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:17.563 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.820 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:17.820 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:17.820 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:17.820 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2664226 00:17:17.820 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2664226 ']' 00:17:17.820 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2664226 00:17:17.820 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:17.820 
10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:17.820 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2664226 00:17:17.820 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:17.820 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:17.820 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2664226' 00:17:17.820 killing process with pid 2664226 00:17:17.820 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2664226 00:17:17.820 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2664226 00:17:18.078 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:18.078 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:18.078 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:18.078 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:18.078 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:18.078 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:18.078 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:18.078 rmmod nvme_tcp 00:17:18.078 rmmod nvme_fabrics 00:17:18.078 rmmod nvme_keyring 00:17:18.078 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2686234 ']' 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2686234 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2686234 ']' 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2686234 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2686234 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2686234' 00:17:18.336 killing process with pid 2686234 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2686234 00:17:18.336 10:45:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2686234 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.336 10:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.YdF /tmp/spdk.key-sha256.SPK /tmp/spdk.key-sha384.5O4 /tmp/spdk.key-sha512.Q7E /tmp/spdk.key-sha512.lq4 /tmp/spdk.key-sha384.cXc /tmp/spdk.key-sha256.sx6 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:20.870 00:17:20.870 real 2m31.620s 00:17:20.870 user 5m50.320s 00:17:20.870 sys 0m23.725s 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.870 ************************************ 00:17:20.870 END TEST nvmf_auth_target 00:17:20.870 ************************************ 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:20.870 ************************************ 00:17:20.870 START TEST nvmf_bdevio_no_huge 00:17:20.870 ************************************ 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:20.870 * Looking for test storage... 
00:17:20.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:20.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.870 --rc genhtml_branch_coverage=1 00:17:20.870 --rc genhtml_function_coverage=1 00:17:20.870 --rc genhtml_legend=1 00:17:20.870 --rc geninfo_all_blocks=1 00:17:20.870 --rc geninfo_unexecuted_blocks=1 00:17:20.870 00:17:20.870 ' 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:20.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.870 --rc genhtml_branch_coverage=1 00:17:20.870 --rc genhtml_function_coverage=1 00:17:20.870 --rc genhtml_legend=1 00:17:20.870 --rc geninfo_all_blocks=1 00:17:20.870 --rc geninfo_unexecuted_blocks=1 00:17:20.870 00:17:20.870 ' 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:20.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.870 --rc genhtml_branch_coverage=1 00:17:20.870 --rc genhtml_function_coverage=1 00:17:20.870 --rc genhtml_legend=1 00:17:20.870 --rc geninfo_all_blocks=1 00:17:20.870 --rc geninfo_unexecuted_blocks=1 00:17:20.870 00:17:20.870 ' 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:20.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.870 --rc genhtml_branch_coverage=1 00:17:20.870 --rc genhtml_function_coverage=1 00:17:20.870 --rc genhtml_legend=1 00:17:20.870 --rc geninfo_all_blocks=1 00:17:20.870 --rc geninfo_unexecuted_blocks=1 00:17:20.870 00:17:20.870 ' 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.870 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:20.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:20.871 10:45:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:26.144 
10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:26.144 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:26.145 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:26.145 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:26.145 Found net devices under 0000:86:00.0: cvl_0_0 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:26.145 Found net devices under 0000:86:00.1: cvl_0_1 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:26.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:17:26.145 00:17:26.145 --- 10.0.0.2 ping statistics --- 00:17:26.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.145 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:26.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:17:26.145 00:17:26.145 --- 10.0.0.1 ping statistics --- 00:17:26.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.145 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2692904 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2692904 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 2692904 ']' 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:26.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.145 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:26.145 [2024-11-07 10:45:53.398560] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:17:26.146 [2024-11-07 10:45:53.398610] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:26.146 [2024-11-07 10:45:53.473465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:26.146 [2024-11-07 10:45:53.521088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.146 [2024-11-07 10:45:53.521124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.146 [2024-11-07 10:45:53.521131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.146 [2024-11-07 10:45:53.521137] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.146 [2024-11-07 10:45:53.521142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
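[editor's note] For readability, the network plumbing that nvmftestinit/nvmf_tcp_init performed in the trace above condenses to the shell sketch below. The interface names (cvl_0_0, cvl_0_1), addresses, and namespace name are the ones printed by the test; this is a simplified rendering of what test/nvmf/common.sh did on this host, not the verbatim script.

# One E810 port (cvl_0_0) is moved into a private namespace and serves as the
# NVMe-oF target side; the other port (cvl_0_1) stays in the root namespace as
# the initiator side. 10.0.0.2 = target, 10.0.0.1 = initiator.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open TCP/4420 on the initiator-side interface, tagged so teardown can find it.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Sanity-check connectivity in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1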
00:17:26.146 [2024-11-07 10:45:53.522418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:26.146 [2024-11-07 10:45:53.522526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:26.146 [2024-11-07 10:45:53.522632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:26.146 [2024-11-07 10:45:53.522632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.146 [2024-11-07 10:45:53.668795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.146 Malloc0 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:26.146 [2024-11-07 10:45:53.705066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:26.146 { 00:17:26.146 "params": { 00:17:26.146 "name": "Nvme$subsystem", 00:17:26.146 "trtype": "$TEST_TRANSPORT", 00:17:26.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:26.146 "adrfam": "ipv4", 00:17:26.146 "trsvcid": "$NVMF_PORT", 00:17:26.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:26.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:26.146 "hdgst": ${hdgst:-false}, 00:17:26.146 "ddgst": ${ddgst:-false} 00:17:26.146 }, 00:17:26.146 "method": "bdev_nvme_attach_controller" 00:17:26.146 } 00:17:26.146 EOF 00:17:26.146 )") 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:26.146 10:45:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:26.146 "params": { 00:17:26.146 "name": "Nvme1", 00:17:26.146 "trtype": "tcp", 00:17:26.146 "traddr": "10.0.0.2", 00:17:26.146 "adrfam": "ipv4", 00:17:26.146 "trsvcid": "4420", 00:17:26.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:26.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:26.146 "hdgst": false, 00:17:26.146 "ddgst": false 00:17:26.146 }, 00:17:26.146 "method": "bdev_nvme_attach_controller" 00:17:26.146 }' 00:17:26.146 [2024-11-07 10:45:53.752980] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
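[editor's note] The target configuration that bdevio.sh just issued through rpc_cmd maps onto plain scripts/rpc.py calls roughly as sketched below (rpc_cmd is a thin wrapper around rpc.py talking to /var/tmp/spdk.sock). Sizes, NQNs and addresses are the ones shown in the trace; the bdevio invocation mirrors the command line above, with gen_nvmf_target_json producing the bdev_nvme_attach_controller JSON that was printed.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport plus a 64 MiB malloc bdev with 512-byte blocks as the backing device.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0

# Subsystem cnode1: allow any host (-a), attach the namespace, and listen on the
# target-side address inside the namespace.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevio reads the generated JSON config over an anonymous fd
# (the /dev/fd/62 seen in the trace is bash process substitution).
test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024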
00:17:26.146 [2024-11-07 10:45:53.753024] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2693004 ] 00:17:26.404 [2024-11-07 10:45:53.820571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:26.404 [2024-11-07 10:45:53.870327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.404 [2024-11-07 10:45:53.870423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.404 [2024-11-07 10:45:53.870424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.662 I/O targets: 00:17:26.662 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:26.662 00:17:26.662 00:17:26.662 CUnit - A unit testing framework for C - Version 2.1-3 00:17:26.662 http://cunit.sourceforge.net/ 00:17:26.662 00:17:26.662 00:17:26.662 Suite: bdevio tests on: Nvme1n1 00:17:26.662 Test: blockdev write read block ...passed 00:17:26.662 Test: blockdev write zeroes read block ...passed 00:17:26.662 Test: blockdev write zeroes read no split ...passed 00:17:26.662 Test: blockdev write zeroes read split ...passed 00:17:26.920 Test: blockdev write zeroes read split partial ...passed 00:17:26.920 Test: blockdev reset ...[2024-11-07 10:45:54.359781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:26.920 [2024-11-07 10:45:54.359847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f88e0 (9): Bad file descriptor 00:17:26.920 [2024-11-07 10:45:54.371470] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:26.920 passed 00:17:26.920 Test: blockdev write read 8 blocks ...passed 00:17:26.920 Test: blockdev write read size > 128k ...passed 00:17:26.920 Test: blockdev write read invalid size ...passed 00:17:26.920 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:26.920 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:26.920 Test: blockdev write read max offset ...passed 00:17:26.920 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:26.920 Test: blockdev writev readv 8 blocks ...passed 00:17:26.920 Test: blockdev writev readv 30 x 1block ...passed 00:17:26.920 Test: blockdev writev readv block ...passed 00:17:26.920 Test: blockdev writev readv size > 128k ...passed 00:17:26.920 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:26.920 Test: blockdev comparev and writev ...[2024-11-07 10:45:54.581215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.920 [2024-11-07 10:45:54.581245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:26.920 [2024-11-07 10:45:54.581259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.920 [2024-11-07 10:45:54.581268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:26.920 [2024-11-07 10:45:54.581523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.920 [2024-11-07 10:45:54.581534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:26.920 [2024-11-07 10:45:54.581546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.920 [2024-11-07 10:45:54.581553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:26.920 [2024-11-07 10:45:54.581789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.920 [2024-11-07 10:45:54.581799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:26.920 [2024-11-07 10:45:54.581811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.920 [2024-11-07 10:45:54.581819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:26.920 [2024-11-07 10:45:54.582057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.920 [2024-11-07 10:45:54.582068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:26.920 [2024-11-07 10:45:54.582097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:26.920 [2024-11-07 10:45:54.582105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:27.179 passed 00:17:27.179 Test: blockdev nvme passthru rw ...passed 00:17:27.179 Test: blockdev nvme passthru vendor specific ...[2024-11-07 10:45:54.663817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.179 [2024-11-07 10:45:54.663841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:27.179 [2024-11-07 10:45:54.663964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.179 [2024-11-07 10:45:54.663974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:27.179 [2024-11-07 10:45:54.664083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.179 [2024-11-07 10:45:54.664093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:27.179 [2024-11-07 10:45:54.664201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:27.179 [2024-11-07 10:45:54.664212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:27.179 passed 00:17:27.179 Test: blockdev nvme admin passthru ...passed 00:17:27.179 Test: blockdev copy ...passed 00:17:27.179 00:17:27.179 Run Summary: Type Total Ran Passed Failed Inactive 00:17:27.179 suites 1 1 n/a 0 0 00:17:27.179 tests 23 23 23 0 0 00:17:27.179 asserts 152 152 152 0 n/a 00:17:27.179 00:17:27.179 Elapsed time = 1.037 seconds 00:17:27.437 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.437 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.437 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:27.437 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.437 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:27.437 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:27.437 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:27.437 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:27.437 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:27.437 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:27.437 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:27.437 10:45:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:27.437 rmmod nvme_tcp 00:17:27.437 rmmod nvme_fabrics 00:17:27.437 rmmod nvme_keyring 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2692904 ']' 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2692904 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 2692904 ']' 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 2692904 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2692904 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2692904' 00:17:27.437 killing process with pid 2692904 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 2692904 00:17:27.437 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 2692904 00:17:28.005 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:28.005 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:28.005 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:28.005 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:28.005 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:28.005 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:28.005 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:28.005 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:28.005 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:28.005 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.005 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.005 10:45:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.910 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:29.910 00:17:29.910 real 0m9.363s 00:17:29.910 user 0m11.037s 00:17:29.910 sys 0m4.666s 00:17:29.910 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:29.910 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.910 ************************************ 00:17:29.910 END TEST nvmf_bdevio_no_huge 00:17:29.910 ************************************ 00:17:29.910 10:45:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:29.910 10:45:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:29.910 10:45:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:29.910 10:45:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:29.910 ************************************ 00:17:29.910 START TEST nvmf_tls 00:17:29.910 ************************************ 00:17:29.910 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:30.169 * Looking for test storage... 00:17:30.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.169 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:30.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.170 --rc genhtml_branch_coverage=1 00:17:30.170 --rc genhtml_function_coverage=1 00:17:30.170 --rc genhtml_legend=1 00:17:30.170 --rc geninfo_all_blocks=1 00:17:30.170 --rc geninfo_unexecuted_blocks=1 00:17:30.170 00:17:30.170 ' 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:30.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.170 --rc genhtml_branch_coverage=1 00:17:30.170 --rc genhtml_function_coverage=1 00:17:30.170 --rc genhtml_legend=1 00:17:30.170 --rc geninfo_all_blocks=1 00:17:30.170 --rc geninfo_unexecuted_blocks=1 00:17:30.170 00:17:30.170 ' 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:30.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.170 --rc genhtml_branch_coverage=1 00:17:30.170 --rc genhtml_function_coverage=1 00:17:30.170 --rc genhtml_legend=1 00:17:30.170 --rc geninfo_all_blocks=1 00:17:30.170 --rc geninfo_unexecuted_blocks=1 00:17:30.170 00:17:30.170 ' 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:30.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.170 --rc genhtml_branch_coverage=1 00:17:30.170 --rc genhtml_function_coverage=1 00:17:30.170 --rc genhtml_legend=1 00:17:30.170 --rc geninfo_all_blocks=1 00:17:30.170 --rc geninfo_unexecuted_blocks=1 00:17:30.170 00:17:30.170 ' 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
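[editor's note] The long run of scripts/common.sh trace lines above (lt 1.15 2 -> cmp_versions 1.15 '<' 2, the IFS=.-: reads, the per-field loop) is just a field-by-field version comparison used to pick lcov options. Below is a minimal re-implementation of that idea, assuming purely numeric fields; it is not the verbatim helper, which also handles more operators and suffixes.

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    # Compare dotted versions field by field; missing fields count as 0.
    local -a v1 v2
    local i a b op=$2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$3"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        ((a == b)) && continue
        case "$op" in
            '<' | '<=') ((a < b)); return ;;
            '>' | '>=') ((a > b)); return ;;
            *) return 1 ;;
        esac
    done
    # All fields equal: only the inclusive/equality operators succeed.
    [[ $op == '<=' || $op == '>=' || $op == '==' ]]
}

lt 1.15 2 && echo "lcov 1.15 predates 2"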
00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:30.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:30.170 10:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.733 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:36.734 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:36.734 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:36.734 Found net devices under 0000:86:00.0: cvl_0_0 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:36.734 Found net devices under 0000:86:00.1: cvl_0_1 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:36.734 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:36.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:17:36.735 00:17:36.735 --- 10.0.0.2 ping statistics --- 00:17:36.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.735 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:17:36.735 00:17:36.735 --- 10.0.0.1 ping statistics --- 00:17:36.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.735 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2696809 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2696809 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2696809 ']' 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.735 [2024-11-07 10:46:03.656573] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
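
In outline, the per-command trace above builds a loopback NVMe/TCP test topology: one of the two e810 ports enumerated earlier (cvl_0_0, the NVMF_TARGET_INTERFACE) is moved into a private network namespace and carries the target side, while its sibling port (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of the traced commands (interface names and addresses are the values used in this run):

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check
  modprobe nvme-tcp                                                   # kernel NVMe/TCP support

The target application itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -m 0x2 --wait-for-rpc, as traced above), which is why every target-side RPC that follows talks to a process that only sees cvl_0_0.
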
00:17:36.735 [2024-11-07 10:46:03.656621] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.735 [2024-11-07 10:46:03.725601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.735 [2024-11-07 10:46:03.767128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.735 [2024-11-07 10:46:03.767169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.735 [2024-11-07 10:46:03.767177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.735 [2024-11-07 10:46:03.767183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.735 [2024-11-07 10:46:03.767188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.735 [2024-11-07 10:46:03.767761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:36.735 10:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:36.735 true 00:17:36.735 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:36.735 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:36.735 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:36.735 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:36.735 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:36.994 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:36.994 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:36.994 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:36.994 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:36.994 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:37.253 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:37.253 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:37.511 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:37.511 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:37.511 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:37.511 10:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:37.511 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:37.511 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:37.511 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:37.770 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:37.770 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:38.028 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:38.028 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:38.028 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:38.286 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:38.286 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:38.286 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:38.286 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:38.286 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:38.287 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:38.287 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:38.287 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:38.287 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:38.287 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:38.287 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:38.544 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:38.544 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:38.544 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:38.544 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:17:38.544 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:38.544 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:38.544 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:38.544 10:46:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:38.544 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:38.544 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:38.544 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.QBgCjCH6QZ 00:17:38.544 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:38.544 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.INp14GmSFC 00:17:38.544 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:38.544 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:38.544 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.QBgCjCH6QZ 00:17:38.544 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.INp14GmSFC 00:17:38.544 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:38.803 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:39.061 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.QBgCjCH6QZ 00:17:39.061 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.QBgCjCH6QZ 00:17:39.061 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:39.061 [2024-11-07 10:46:06.653576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.062 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:39.320 10:46:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:39.578 [2024-11-07 10:46:07.018513] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:39.578 [2024-11-07 10:46:07.018749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.578 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:39.578 malloc0 00:17:39.578 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:39.837 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.QBgCjCH6QZ 00:17:40.095 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:40.353 10:46:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.QBgCjCH6QZ 00:17:50.321 Initializing NVMe Controllers 00:17:50.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:50.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:50.322 Initialization complete. Launching workers. 00:17:50.322 ======================================================== 00:17:50.322 Latency(us) 00:17:50.322 Device Information : IOPS MiB/s Average min max 00:17:50.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16386.15 64.01 3905.86 858.07 5789.77 00:17:50.322 ======================================================== 00:17:50.322 Total : 16386.15 64.01 3905.86 858.07 5789.77 00:17:50.322 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QBgCjCH6QZ 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QBgCjCH6QZ 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2699255 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2699255 /var/tmp/bdevperf.sock 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2699255 ']' 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
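
Stripped of timestamps, the target-side TLS configuration traced above is a short RPC sequence. The PSK is first rendered into the NVMe TLS interchange form (NVMeTLSkey-1:01:<base64 blob>:, where the 01 field comes from the digest argument passed to format_interchange_psk) and written to a 0600 tmpfile; because nvmf_tgt was started with --wait-for-rpc, the ssl socket implementation can be selected and pinned to TLS 1.3 before framework_start_init. A condensed sketch, where rpc.py stands for the scripts/rpc.py path used in the trace:

  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS on the listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.QBgCjCH6QZ                # file holds the NVMeTLSkey-1:01:... string, mode 0600
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

In other words, the PSK is registered once in the target's keyring under the name key0 and then bound to the host1/cnode1 pairing via nvmf_subsystem_add_host.
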
00:17:50.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:50.322 10:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.322 [2024-11-07 10:46:17.942702] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:17:50.322 [2024-11-07 10:46:17.942754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2699255 ] 00:17:50.580 [2024-11-07 10:46:18.002727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.580 [2024-11-07 10:46:18.045731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.580 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:50.580 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:50.580 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QBgCjCH6QZ 00:17:50.838 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:50.838 [2024-11-07 10:46:18.497569] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:51.097 TLSTESTn1 00:17:51.097 10:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:51.097 Running I/O for 10 seconds... 
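
The initiator side mirrors this: a bdevperf instance is started with its own RPC socket, the same key file is registered there under the name key0, and the controller attach is issued with --psk so the NVMe/TCP connection runs over TLS; I/O is then driven through bdevperf's perform_tests helper. Condensed from the trace, with the full paths shortened to the binary names:

  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10    # started in the background by the harness
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QBgCjCH6QZ
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

Note the contrast with the spdk_nvme_perf run just above, which does not go through a keyring at all and instead takes the key file directly via --psk-path.
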
00:17:53.411 4870.00 IOPS, 19.02 MiB/s [2024-11-07T09:46:22.038Z] 4642.00 IOPS, 18.13 MiB/s [2024-11-07T09:46:22.971Z] 4570.33 IOPS, 17.85 MiB/s [2024-11-07T09:46:23.993Z] 4533.00 IOPS, 17.71 MiB/s [2024-11-07T09:46:24.961Z] 4520.20 IOPS, 17.66 MiB/s [2024-11-07T09:46:25.896Z] 4510.83 IOPS, 17.62 MiB/s [2024-11-07T09:46:26.830Z] 4497.71 IOPS, 17.57 MiB/s [2024-11-07T09:46:27.765Z] 4474.12 IOPS, 17.48 MiB/s [2024-11-07T09:46:29.137Z] 4452.56 IOPS, 17.39 MiB/s [2024-11-07T09:46:29.137Z] 4425.70 IOPS, 17.29 MiB/s 00:18:01.466 Latency(us) 00:18:01.466 [2024-11-07T09:46:29.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.466 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:01.466 Verification LBA range: start 0x0 length 0x2000 00:18:01.466 TLSTESTn1 : 10.02 4430.36 17.31 0.00 0.00 28850.34 4729.99 32141.13 00:18:01.466 [2024-11-07T09:46:29.137Z] =================================================================================================================== 00:18:01.466 [2024-11-07T09:46:29.137Z] Total : 4430.36 17.31 0.00 0.00 28850.34 4729.99 32141.13 00:18:01.466 { 00:18:01.466 "results": [ 00:18:01.466 { 00:18:01.466 "job": "TLSTESTn1", 00:18:01.466 "core_mask": "0x4", 00:18:01.466 "workload": "verify", 00:18:01.466 "status": "finished", 00:18:01.466 "verify_range": { 00:18:01.466 "start": 0, 00:18:01.466 "length": 8192 00:18:01.466 }, 00:18:01.466 "queue_depth": 128, 00:18:01.466 "io_size": 4096, 00:18:01.466 "runtime": 10.018375, 00:18:01.466 "iops": 4430.359214942543, 00:18:01.466 "mibps": 17.30609068336931, 00:18:01.466 "io_failed": 0, 00:18:01.466 "io_timeout": 0, 00:18:01.466 "avg_latency_us": 28850.339251392215, 00:18:01.466 "min_latency_us": 4729.989565217391, 00:18:01.466 "max_latency_us": 32141.13391304348 00:18:01.466 } 00:18:01.466 ], 00:18:01.466 "core_count": 1 00:18:01.466 } 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2699255 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2699255 ']' 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2699255 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2699255 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2699255' 00:18:01.466 killing process with pid 2699255 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2699255 00:18:01.466 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.466 00:18:01.466 Latency(us) 00:18:01.466 [2024-11-07T09:46:29.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.466 [2024-11-07T09:46:29.137Z] 
=================================================================================================================== 00:18:01.466 [2024-11-07T09:46:29.137Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2699255 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.INp14GmSFC 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.INp14GmSFC 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.466 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.INp14GmSFC 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.INp14GmSFC 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2700923 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2700923 /var/tmp/bdevperf.sock 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2700923 ']' 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
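
The remaining cases in this part of the script are negative tests: run_bdevperf is wrapped in NOT, so the case only passes if the controller attach fails. In this first one the initiator registers the second key file, /tmp/tmp.INp14GmSFC, which was generated earlier but never configured on the target for the host1/cnode1 pairing, so the TLS handshake cannot complete and bdev_nvme_attach_controller reports an Input/output error. The mismatch, condensed from the commands traced below:

  # Target (configured earlier): host1 on cnode1 is bound to the contents of /tmp/tmp.QBgCjCH6QZ.
  # Initiator (this case): the other key is offered for the same identity, so the handshake fails.
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.INp14GmSFC
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0   # expected to fail
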
00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:01.467 10:46:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.467 [2024-11-07 10:46:29.014948] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:01.467 [2024-11-07 10:46:29.014996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2700923 ] 00:18:01.467 [2024-11-07 10:46:29.077366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.467 [2024-11-07 10:46:29.119107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.725 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:01.725 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:01.725 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.INp14GmSFC 00:18:01.984 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:01.984 [2024-11-07 10:46:29.570886] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.984 [2024-11-07 10:46:29.575673] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:01.984 [2024-11-07 10:46:29.576303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3a180 (107): Transport endpoint is not connected 00:18:01.984 [2024-11-07 10:46:29.577296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3a180 (9): Bad file descriptor 00:18:01.984 [2024-11-07 10:46:29.578298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:01.984 [2024-11-07 10:46:29.578312] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:01.984 [2024-11-07 10:46:29.578320] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:01.984 [2024-11-07 10:46:29.578328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:01.984 request: 00:18:01.984 { 00:18:01.984 "name": "TLSTEST", 00:18:01.984 "trtype": "tcp", 00:18:01.984 "traddr": "10.0.0.2", 00:18:01.984 "adrfam": "ipv4", 00:18:01.984 "trsvcid": "4420", 00:18:01.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:01.984 "prchk_reftag": false, 00:18:01.984 "prchk_guard": false, 00:18:01.984 "hdgst": false, 00:18:01.984 "ddgst": false, 00:18:01.984 "psk": "key0", 00:18:01.984 "allow_unrecognized_csi": false, 00:18:01.984 "method": "bdev_nvme_attach_controller", 00:18:01.984 "req_id": 1 00:18:01.984 } 00:18:01.984 Got JSON-RPC error response 00:18:01.984 response: 00:18:01.984 { 00:18:01.984 "code": -5, 00:18:01.984 "message": "Input/output error" 00:18:01.984 } 00:18:01.984 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2700923 00:18:01.984 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2700923 ']' 00:18:01.984 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2700923 00:18:01.984 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:01.984 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:01.984 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2700923 00:18:01.984 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:01.984 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:01.984 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2700923' 00:18:01.984 killing process with pid 2700923 00:18:01.984 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2700923 00:18:01.984 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.984 00:18:01.984 Latency(us) 00:18:01.984 [2024-11-07T09:46:29.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.984 [2024-11-07T09:46:29.655Z] =================================================================================================================== 00:18:01.984 [2024-11-07T09:46:29.655Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:01.984 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2700923 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QBgCjCH6QZ 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.QBgCjCH6QZ 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.QBgCjCH6QZ 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QBgCjCH6QZ 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2701115 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2701115 /var/tmp/bdevperf.sock 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2701115 ']' 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:02.241 10:46:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.241 [2024-11-07 10:46:29.850304] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:18:02.241 [2024-11-07 10:46:29.850354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2701115 ] 00:18:02.241 [2024-11-07 10:46:29.909338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.498 [2024-11-07 10:46:29.946645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.498 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:02.498 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:02.498 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QBgCjCH6QZ 00:18:02.755 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:02.755 [2024-11-07 10:46:30.414354] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.013 [2024-11-07 10:46:30.425096] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:03.013 [2024-11-07 10:46:30.425121] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:03.013 [2024-11-07 10:46:30.425144] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:03.013 [2024-11-07 10:46:30.425770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd13180 (107): Transport endpoint is not connected 00:18:03.013 [2024-11-07 10:46:30.426761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd13180 (9): Bad file descriptor 00:18:03.013 [2024-11-07 10:46:30.427763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:03.013 [2024-11-07 10:46:30.427774] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:03.013 [2024-11-07 10:46:30.427782] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:03.013 [2024-11-07 10:46:30.427791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
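
The failure mode here differs from the previous case: the error above shows that the target resolves PSKs by the TLS identity string (NVMe0R01 <hostnqn> <subnqn>), and nqn.2016-06.io.spdk:host2 was never added to cnode1, so there is simply no key to look up and the connection is dropped before the RPC below can return. For contrast only, the lookup would have something to find if the target had also been told about host2, roughly as follows (hypothetical, never issued in this run):

  # Hypothetical, for illustration only; not part of the traced test.
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0
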
00:18:03.013 request: 00:18:03.013 { 00:18:03.013 "name": "TLSTEST", 00:18:03.013 "trtype": "tcp", 00:18:03.013 "traddr": "10.0.0.2", 00:18:03.013 "adrfam": "ipv4", 00:18:03.013 "trsvcid": "4420", 00:18:03.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.013 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:03.013 "prchk_reftag": false, 00:18:03.013 "prchk_guard": false, 00:18:03.013 "hdgst": false, 00:18:03.013 "ddgst": false, 00:18:03.013 "psk": "key0", 00:18:03.013 "allow_unrecognized_csi": false, 00:18:03.013 "method": "bdev_nvme_attach_controller", 00:18:03.013 "req_id": 1 00:18:03.013 } 00:18:03.013 Got JSON-RPC error response 00:18:03.013 response: 00:18:03.013 { 00:18:03.013 "code": -5, 00:18:03.013 "message": "Input/output error" 00:18:03.013 } 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2701115 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2701115 ']' 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2701115 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2701115 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2701115' 00:18:03.013 killing process with pid 2701115 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2701115 00:18:03.013 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.013 00:18:03.013 Latency(us) 00:18:03.013 [2024-11-07T09:46:30.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.013 [2024-11-07T09:46:30.684Z] =================================================================================================================== 00:18:03.013 [2024-11-07T09:46:30.684Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2701115 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QBgCjCH6QZ 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.QBgCjCH6QZ 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:03.013 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.QBgCjCH6QZ 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QBgCjCH6QZ 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2701344 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2701344 /var/tmp/bdevperf.sock 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2701344 ']' 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:03.014 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.272 [2024-11-07 10:46:30.704048] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:18:03.272 [2024-11-07 10:46:30.704098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2701344 ] 00:18:03.272 [2024-11-07 10:46:30.763616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.272 [2024-11-07 10:46:30.801443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.272 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:03.272 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:03.272 10:46:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QBgCjCH6QZ 00:18:03.531 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:03.790 [2024-11-07 10:46:31.248612] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.790 [2024-11-07 10:46:31.257208] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:03.790 [2024-11-07 10:46:31.257231] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:03.790 [2024-11-07 10:46:31.257256] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:03.790 [2024-11-07 10:46:31.258078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3f180 (107): Transport endpoint is not connected 00:18:03.790 [2024-11-07 10:46:31.259072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b3f180 (9): Bad file descriptor 00:18:03.790 [2024-11-07 10:46:31.260073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:03.790 [2024-11-07 10:46:31.260084] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:03.790 [2024-11-07 10:46:31.260091] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:03.790 [2024-11-07 10:46:31.260099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:03.790 request: 00:18:03.790 { 00:18:03.790 "name": "TLSTEST", 00:18:03.790 "trtype": "tcp", 00:18:03.790 "traddr": "10.0.0.2", 00:18:03.790 "adrfam": "ipv4", 00:18:03.790 "trsvcid": "4420", 00:18:03.790 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:03.790 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.790 "prchk_reftag": false, 00:18:03.790 "prchk_guard": false, 00:18:03.790 "hdgst": false, 00:18:03.790 "ddgst": false, 00:18:03.790 "psk": "key0", 00:18:03.790 "allow_unrecognized_csi": false, 00:18:03.790 "method": "bdev_nvme_attach_controller", 00:18:03.790 "req_id": 1 00:18:03.790 } 00:18:03.790 Got JSON-RPC error response 00:18:03.790 response: 00:18:03.790 { 00:18:03.790 "code": -5, 00:18:03.790 "message": "Input/output error" 00:18:03.790 } 00:18:03.790 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2701344 00:18:03.790 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2701344 ']' 00:18:03.790 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2701344 00:18:03.790 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:03.790 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:03.790 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2701344 00:18:03.790 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:03.790 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:03.790 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2701344' 00:18:03.790 killing process with pid 2701344 00:18:03.790 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2701344 00:18:03.790 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.790 00:18:03.790 Latency(us) 00:18:03.790 [2024-11-07T09:46:31.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.790 [2024-11-07T09:46:31.461Z] =================================================================================================================== 00:18:03.790 [2024-11-07T09:46:31.461Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:03.790 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2701344 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:04.049 
10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2701364 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2701364 /var/tmp/bdevperf.sock 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2701364 ']' 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.049 [2024-11-07 10:46:31.530965] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:18:04.049 [2024-11-07 10:46:31.531015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2701364 ] 00:18:04.049 [2024-11-07 10:46:31.589531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.049 [2024-11-07 10:46:31.627116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:04.049 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:04.307 [2024-11-07 10:46:31.892943] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:04.307 [2024-11-07 10:46:31.892977] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:04.307 request: 00:18:04.307 { 00:18:04.307 "name": "key0", 00:18:04.307 "path": "", 00:18:04.307 "method": "keyring_file_add_key", 00:18:04.307 "req_id": 1 00:18:04.307 } 00:18:04.307 Got JSON-RPC error response 00:18:04.307 response: 00:18:04.307 { 00:18:04.307 "code": -1, 00:18:04.307 "message": "Operation not permitted" 00:18:04.307 } 00:18:04.307 10:46:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:04.565 [2024-11-07 10:46:32.085536] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.565 [2024-11-07 10:46:32.085574] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:04.565 request: 00:18:04.565 { 00:18:04.565 "name": "TLSTEST", 00:18:04.565 "trtype": "tcp", 00:18:04.565 "traddr": "10.0.0.2", 00:18:04.565 "adrfam": "ipv4", 00:18:04.565 "trsvcid": "4420", 00:18:04.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.565 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.565 "prchk_reftag": false, 00:18:04.565 "prchk_guard": false, 00:18:04.565 "hdgst": false, 00:18:04.565 "ddgst": false, 00:18:04.565 "psk": "key0", 00:18:04.565 "allow_unrecognized_csi": false, 00:18:04.565 "method": "bdev_nvme_attach_controller", 00:18:04.565 "req_id": 1 00:18:04.565 } 00:18:04.565 Got JSON-RPC error response 00:18:04.565 response: 00:18:04.565 { 00:18:04.565 "code": -126, 00:18:04.565 "message": "Required key not available" 00:18:04.565 } 00:18:04.565 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2701364 00:18:04.565 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2701364 ']' 00:18:04.565 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2701364 00:18:04.565 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:04.565 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:04.565 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
2701364 00:18:04.565 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:04.565 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:04.565 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2701364' 00:18:04.565 killing process with pid 2701364 00:18:04.565 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2701364 00:18:04.565 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.565 00:18:04.565 Latency(us) 00:18:04.565 [2024-11-07T09:46:32.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.565 [2024-11-07T09:46:32.236Z] =================================================================================================================== 00:18:04.565 [2024-11-07T09:46:32.236Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:04.565 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2701364 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2696809 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2696809 ']' 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2696809 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2696809 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2696809' 00:18:04.824 killing process with pid 2696809 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2696809 00:18:04.824 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2696809 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:05.083 10:46:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.KgemzkZaaV 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.KgemzkZaaV 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2701611 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2701611 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2701611 ']' 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:05.083 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.083 [2024-11-07 10:46:32.621712] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:05.083 [2024-11-07 10:46:32.621757] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.083 [2024-11-07 10:46:32.686060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.083 [2024-11-07 10:46:32.721631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.083 [2024-11-07 10:46:32.721670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
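The format_interchange_psk step just above wraps the 48-hex-character configured PSK into the TLS PSK interchange form NVMeTLSkey-1:02:<base64>: (the 02 field matching digest=2, i.e. SHA-384) and writes it to a 0600 temp file. A rough reconstruction of what the inline python - helper computes, under two stated assumptions: the key string is used verbatim as the key material (which the base64 payload in the log bears out) and the trailing four bytes are a little-endian CRC-32 of that material:

# Reconstruction of the format_interchange_psk / format_key step above (assumptions noted inline).
KEY=00112233445566778899aabbccddeeff0011223344556677
PSK=$(python3 - "$KEY" <<'PYEOF'
import sys, base64, struct, zlib
key = sys.argv[1].encode()                  # key material taken verbatim from the argument
crc = struct.pack('<I', zlib.crc32(key))    # assumed 4-byte little-endian CRC-32 trailer
print('NVMeTLSkey-1:02:' + base64.b64encode(key + crc).decode() + ':')   # 02 = SHA-384 indicator
PYEOF
)
KEY_PATH=$(mktemp)
echo -n "$PSK" > "$KEY_PATH"    # e.g. /tmp/tmp.KgemzkZaaV in this run
chmod 0600 "$KEY_PATH"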
00:18:05.083 [2024-11-07 10:46:32.721678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.083 [2024-11-07 10:46:32.721684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.083 [2024-11-07 10:46:32.721690] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.083 [2024-11-07 10:46:32.722240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.341 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:05.341 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:05.341 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:05.341 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:05.341 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.341 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.341 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.KgemzkZaaV 00:18:05.341 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KgemzkZaaV 00:18:05.341 10:46:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:05.599 [2024-11-07 10:46:33.013479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.599 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:05.599 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:05.858 [2024-11-07 10:46:33.402521] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.858 [2024-11-07 10:46:33.402734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.858 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:06.115 malloc0 00:18:06.115 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:06.115 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KgemzkZaaV 00:18:06.373 10:46:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KgemzkZaaV 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KgemzkZaaV 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2701865 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2701865 /var/tmp/bdevperf.sock 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2701865 ']' 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:06.632 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.632 [2024-11-07 10:46:34.176963] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
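Stripped of the xtrace prefixes, the setup_nvmf_tgt sequence shown above is a short series of rpc.py calls against the target: create the TCP transport, a subsystem backed by a malloc bdev, a TLS-capable listener (-k), register the PSK file as key0, and admit host1 with that key. A condensed sketch of the same calls (the job actually runs the target inside the cvl_0_0_ns_spdk network namespace, which is omitted here):

# Target-side TLS setup, condensed from the setup_nvmf_tgt trace above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"                 # talks to the target's /var/tmp/spdk.sock by default
KEY_PATH=/tmp/tmp.KgemzkZaaV               # the 0600 PSK file created above
"$RPC" nvmf_create_transport -t tcp -o
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
"$RPC" bdev_malloc_create 32 4096 -b malloc0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$RPC" keyring_file_add_key key0 "$KEY_PATH"
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0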
00:18:06.632 [2024-11-07 10:46:34.177010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2701865 ] 00:18:06.632 [2024-11-07 10:46:34.236159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.632 [2024-11-07 10:46:34.277203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.890 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:06.890 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:06.890 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KgemzkZaaV 00:18:06.890 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:07.148 [2024-11-07 10:46:34.736552] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:07.148 TLSTESTn1 00:18:07.406 10:46:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:07.406 Running I/O for 10 seconds... 00:18:09.273 5447.00 IOPS, 21.28 MiB/s [2024-11-07T09:46:38.318Z] 5465.00 IOPS, 21.35 MiB/s [2024-11-07T09:46:39.252Z] 5517.00 IOPS, 21.55 MiB/s [2024-11-07T09:46:40.184Z] 5472.25 IOPS, 21.38 MiB/s [2024-11-07T09:46:41.115Z] 5480.20 IOPS, 21.41 MiB/s [2024-11-07T09:46:42.048Z] 5437.67 IOPS, 21.24 MiB/s [2024-11-07T09:46:42.981Z] 5432.00 IOPS, 21.22 MiB/s [2024-11-07T09:46:44.366Z] 5437.38 IOPS, 21.24 MiB/s [2024-11-07T09:46:45.301Z] 5440.89 IOPS, 21.25 MiB/s [2024-11-07T09:46:45.301Z] 5448.80 IOPS, 21.28 MiB/s 00:18:17.630 Latency(us) 00:18:17.630 [2024-11-07T09:46:45.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.630 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:17.630 Verification LBA range: start 0x0 length 0x2000 00:18:17.630 TLSTESTn1 : 10.01 5453.91 21.30 0.00 0.00 23434.41 5527.82 25302.59 00:18:17.630 [2024-11-07T09:46:45.301Z] =================================================================================================================== 00:18:17.630 [2024-11-07T09:46:45.301Z] Total : 5453.91 21.30 0.00 0.00 23434.41 5527.82 25302.59 00:18:17.630 { 00:18:17.630 "results": [ 00:18:17.630 { 00:18:17.630 "job": "TLSTESTn1", 00:18:17.630 "core_mask": "0x4", 00:18:17.630 "workload": "verify", 00:18:17.630 "status": "finished", 00:18:17.630 "verify_range": { 00:18:17.630 "start": 0, 00:18:17.630 "length": 8192 00:18:17.630 }, 00:18:17.630 "queue_depth": 128, 00:18:17.630 "io_size": 4096, 00:18:17.630 "runtime": 10.013544, 00:18:17.630 "iops": 5453.913219934921, 00:18:17.630 "mibps": 21.304348515370783, 00:18:17.630 "io_failed": 0, 00:18:17.630 "io_timeout": 0, 00:18:17.630 "avg_latency_us": 23434.414510575996, 00:18:17.630 "min_latency_us": 5527.819130434783, 00:18:17.630 "max_latency_us": 25302.594782608696 00:18:17.630 } 00:18:17.630 ], 00:18:17.630 
"core_count": 1 00:18:17.630 } 00:18:17.630 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:17.630 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2701865 00:18:17.630 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2701865 ']' 00:18:17.630 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2701865 00:18:17.630 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:17.630 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:17.630 10:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2701865 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2701865' 00:18:17.630 killing process with pid 2701865 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2701865 00:18:17.630 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.630 00:18:17.630 Latency(us) 00:18:17.630 [2024-11-07T09:46:45.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.630 [2024-11-07T09:46:45.301Z] =================================================================================================================== 00:18:17.630 [2024-11-07T09:46:45.301Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2701865 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.KgemzkZaaV 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KgemzkZaaV 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KgemzkZaaV 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KgemzkZaaV 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KgemzkZaaV 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2703700 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2703700 /var/tmp/bdevperf.sock 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2703700 ']' 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:17.630 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.630 [2024-11-07 10:46:45.233287] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
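Before the permissions experiment below produces its expected failure, the initiator half of the successful run earlier can be summarized the same way: the key file is registered with bdevperf's own keyring over /var/tmp/bdevperf.sock, a TLS controller named TLSTEST is attached, and bdevperf.py drives the verify workload, which sustained roughly 5450 IOPS over the 10-second run above. A condensed sketch of those calls:

# Initiator half of the successful run earlier: register the key with bdevperf's keyring,
# attach a TLS controller, then drive the verify workload via bdevperf.py.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
SOCK=/var/tmp/bdevperf.sock
"$RPC" -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.KgemzkZaaV
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$SOCK" perform_tests   # ~5450 IOPS for 10 s above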
00:18:17.630 [2024-11-07 10:46:45.233334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2703700 ] 00:18:17.630 [2024-11-07 10:46:45.291483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.888 [2024-11-07 10:46:45.329305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.888 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:17.888 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:17.888 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KgemzkZaaV 00:18:18.145 [2024-11-07 10:46:45.587439] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KgemzkZaaV': 0100666 00:18:18.146 [2024-11-07 10:46:45.587474] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:18.146 request: 00:18:18.146 { 00:18:18.146 "name": "key0", 00:18:18.146 "path": "/tmp/tmp.KgemzkZaaV", 00:18:18.146 "method": "keyring_file_add_key", 00:18:18.146 "req_id": 1 00:18:18.146 } 00:18:18.146 Got JSON-RPC error response 00:18:18.146 response: 00:18:18.146 { 00:18:18.146 "code": -1, 00:18:18.146 "message": "Operation not permitted" 00:18:18.146 } 00:18:18.146 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:18.146 [2024-11-07 10:46:45.771994] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.146 [2024-11-07 10:46:45.772023] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:18.146 request: 00:18:18.146 { 00:18:18.146 "name": "TLSTEST", 00:18:18.146 "trtype": "tcp", 00:18:18.146 "traddr": "10.0.0.2", 00:18:18.146 "adrfam": "ipv4", 00:18:18.146 "trsvcid": "4420", 00:18:18.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.146 "prchk_reftag": false, 00:18:18.146 "prchk_guard": false, 00:18:18.146 "hdgst": false, 00:18:18.146 "ddgst": false, 00:18:18.146 "psk": "key0", 00:18:18.146 "allow_unrecognized_csi": false, 00:18:18.146 "method": "bdev_nvme_attach_controller", 00:18:18.146 "req_id": 1 00:18:18.146 } 00:18:18.146 Got JSON-RPC error response 00:18:18.146 response: 00:18:18.146 { 00:18:18.146 "code": -126, 00:18:18.146 "message": "Required key not available" 00:18:18.146 } 00:18:18.146 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2703700 00:18:18.146 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2703700 ']' 00:18:18.146 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2703700 00:18:18.146 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:18.146 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:18.146 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2703700 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2703700' 00:18:18.405 killing process with pid 2703700 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2703700 00:18:18.405 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.405 00:18:18.405 Latency(us) 00:18:18.405 [2024-11-07T09:46:46.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.405 [2024-11-07T09:46:46.076Z] =================================================================================================================== 00:18:18.405 [2024-11-07T09:46:46.076Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2703700 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2701611 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2701611 ']' 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2701611 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:18.405 10:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2701611 00:18:18.405 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:18.405 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:18.405 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2701611' 00:18:18.405 killing process with pid 2701611 00:18:18.405 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2701611 00:18:18.405 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2701611 00:18:18.663 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:18.663 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:18.663 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:18.663 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.663 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2703940 00:18:18.663 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:18.663 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2703940 00:18:18.663 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2703940 ']' 00:18:18.663 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.663 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:18.663 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.663 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:18.663 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.663 [2024-11-07 10:46:46.263987] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:18.663 [2024-11-07 10:46:46.264033] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.663 [2024-11-07 10:46:46.330057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.922 [2024-11-07 10:46:46.370964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.922 [2024-11-07 10:46:46.371002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.922 [2024-11-07 10:46:46.371010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.922 [2024-11-07 10:46:46.371016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.922 [2024-11-07 10:46:46.371022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
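The failure in the preceding bdevperf instance was the point of the test: after chmod 0666, keyring_file_add_key rejects the key file ('Invalid permissions for key file ... 0100666'), so bdev_nvme_attach_controller reports 'Required key not available'. A small guard that mirrors this requirement before registering a key, assuming GNU stat from coreutils:

# keyring_file_add_key refuses key files readable by group/other (the 0100666 error above);
# keep the PSK file at 0600 before registering it. GNU stat (coreutils) assumed.
KEY_PATH=/tmp/tmp.KgemzkZaaV
[ "$(stat -c '%a' "$KEY_PATH")" = 600 ] || chmod 0600 "$KEY_PATH"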
00:18:18.922 [2024-11-07 10:46:46.371606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.KgemzkZaaV 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.KgemzkZaaV 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.KgemzkZaaV 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KgemzkZaaV 00:18:18.922 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:19.180 [2024-11-07 10:46:46.674077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.180 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:19.438 10:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:19.438 [2024-11-07 10:46:47.047038] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:19.438 [2024-11-07 10:46:47.047235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.438 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:19.696 malloc0 00:18:19.696 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:19.977 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KgemzkZaaV 00:18:19.977 [2024-11-07 
10:46:47.608650] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KgemzkZaaV': 0100666 00:18:19.977 [2024-11-07 10:46:47.608677] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:19.977 request: 00:18:19.977 { 00:18:19.977 "name": "key0", 00:18:19.977 "path": "/tmp/tmp.KgemzkZaaV", 00:18:19.977 "method": "keyring_file_add_key", 00:18:19.977 "req_id": 1 00:18:19.977 } 00:18:19.977 Got JSON-RPC error response 00:18:19.977 response: 00:18:19.977 { 00:18:19.977 "code": -1, 00:18:19.977 "message": "Operation not permitted" 00:18:19.977 } 00:18:19.977 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:20.236 [2024-11-07 10:46:47.797164] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:20.236 [2024-11-07 10:46:47.797197] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:20.236 request: 00:18:20.236 { 00:18:20.236 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.236 "host": "nqn.2016-06.io.spdk:host1", 00:18:20.236 "psk": "key0", 00:18:20.236 "method": "nvmf_subsystem_add_host", 00:18:20.236 "req_id": 1 00:18:20.236 } 00:18:20.236 Got JSON-RPC error response 00:18:20.236 response: 00:18:20.236 { 00:18:20.236 "code": -32603, 00:18:20.236 "message": "Internal error" 00:18:20.236 } 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2703940 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2703940 ']' 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2703940 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2703940 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2703940' 00:18:20.236 killing process with pid 2703940 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2703940 00:18:20.236 10:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2703940 00:18:20.494 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.KgemzkZaaV 00:18:20.494 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:20.494 10:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:20.494 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:20.494 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.494 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2704211 00:18:20.494 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:20.495 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2704211 00:18:20.495 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2704211 ']' 00:18:20.495 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.495 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:20.495 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.495 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:20.495 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.495 [2024-11-07 10:46:48.083743] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:20.495 [2024-11-07 10:46:48.083787] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.495 [2024-11-07 10:46:48.149730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.753 [2024-11-07 10:46:48.191326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.753 [2024-11-07 10:46:48.191360] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.753 [2024-11-07 10:46:48.191367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.753 [2024-11-07 10:46:48.191373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.753 [2024-11-07 10:46:48.191379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
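In the previous target instance the keyring rejection had a knock-on effect: nvmf_subsystem_add_host --psk key0 failed with -32603 because 'key0' was never added. The PSK name passed to add_host is only a reference into the keyring, so the keyring_file_add_key call has to succeed first. A defensive ordering for those two calls, assuming the keyring_get_keys RPC and its usual pretty-printed JSON output:

# Only reference key0 from nvmf_subsystem_add_host once the keyring actually holds it;
# otherwise the call fails with -32603 as in the previous target instance.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" keyring_file_add_key key0 /tmp/tmp.KgemzkZaaV \
  && "$RPC" keyring_get_keys | grep -q '"name": "key0"' \
  && "$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0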
00:18:20.753 [2024-11-07 10:46:48.191960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.753 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:20.753 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:20.753 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:20.753 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:20.753 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.753 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.753 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.KgemzkZaaV 00:18:20.753 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KgemzkZaaV 00:18:20.753 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:21.011 [2024-11-07 10:46:48.486316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:21.011 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:21.269 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:21.269 [2024-11-07 10:46:48.871300] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:21.269 [2024-11-07 10:46:48.871517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.269 10:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:21.527 malloc0 00:18:21.527 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:21.784 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KgemzkZaaV 00:18:22.042 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:22.042 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:22.042 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2704468 00:18:22.042 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:22.042 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2704468 /var/tmp/bdevperf.sock 00:18:22.042 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 2704468 ']' 00:18:22.042 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.042 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:22.042 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.042 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:22.042 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.042 [2024-11-07 10:46:49.674760] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:22.042 [2024-11-07 10:46:49.674808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2704468 ] 00:18:22.300 [2024-11-07 10:46:49.734118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.300 [2024-11-07 10:46:49.776814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.300 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:22.300 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:22.300 10:46:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KgemzkZaaV 00:18:22.558 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:22.558 [2024-11-07 10:46:50.220295] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:22.815 TLSTESTn1 00:18:22.815 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:23.073 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:23.073 "subsystems": [ 00:18:23.073 { 00:18:23.073 "subsystem": "keyring", 00:18:23.073 "config": [ 00:18:23.073 { 00:18:23.073 "method": "keyring_file_add_key", 00:18:23.073 "params": { 00:18:23.073 "name": "key0", 00:18:23.073 "path": "/tmp/tmp.KgemzkZaaV" 00:18:23.073 } 00:18:23.073 } 00:18:23.073 ] 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "subsystem": "iobuf", 00:18:23.073 "config": [ 00:18:23.073 { 00:18:23.073 "method": "iobuf_set_options", 00:18:23.073 "params": { 00:18:23.073 "small_pool_count": 8192, 00:18:23.073 "large_pool_count": 1024, 00:18:23.073 "small_bufsize": 8192, 00:18:23.073 "large_bufsize": 135168, 00:18:23.073 "enable_numa": false 00:18:23.073 } 00:18:23.073 } 00:18:23.073 ] 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "subsystem": "sock", 00:18:23.073 "config": [ 00:18:23.073 { 00:18:23.073 "method": "sock_set_default_impl", 00:18:23.073 "params": { 00:18:23.073 "impl_name": "posix" 
00:18:23.073 } 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "method": "sock_impl_set_options", 00:18:23.073 "params": { 00:18:23.073 "impl_name": "ssl", 00:18:23.073 "recv_buf_size": 4096, 00:18:23.073 "send_buf_size": 4096, 00:18:23.073 "enable_recv_pipe": true, 00:18:23.073 "enable_quickack": false, 00:18:23.073 "enable_placement_id": 0, 00:18:23.073 "enable_zerocopy_send_server": true, 00:18:23.073 "enable_zerocopy_send_client": false, 00:18:23.073 "zerocopy_threshold": 0, 00:18:23.073 "tls_version": 0, 00:18:23.073 "enable_ktls": false 00:18:23.073 } 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "method": "sock_impl_set_options", 00:18:23.073 "params": { 00:18:23.073 "impl_name": "posix", 00:18:23.073 "recv_buf_size": 2097152, 00:18:23.073 "send_buf_size": 2097152, 00:18:23.073 "enable_recv_pipe": true, 00:18:23.073 "enable_quickack": false, 00:18:23.073 "enable_placement_id": 0, 00:18:23.073 "enable_zerocopy_send_server": true, 00:18:23.073 "enable_zerocopy_send_client": false, 00:18:23.073 "zerocopy_threshold": 0, 00:18:23.073 "tls_version": 0, 00:18:23.073 "enable_ktls": false 00:18:23.073 } 00:18:23.073 } 00:18:23.073 ] 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "subsystem": "vmd", 00:18:23.073 "config": [] 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "subsystem": "accel", 00:18:23.073 "config": [ 00:18:23.073 { 00:18:23.073 "method": "accel_set_options", 00:18:23.073 "params": { 00:18:23.073 "small_cache_size": 128, 00:18:23.073 "large_cache_size": 16, 00:18:23.073 "task_count": 2048, 00:18:23.073 "sequence_count": 2048, 00:18:23.073 "buf_count": 2048 00:18:23.073 } 00:18:23.073 } 00:18:23.073 ] 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "subsystem": "bdev", 00:18:23.073 "config": [ 00:18:23.073 { 00:18:23.073 "method": "bdev_set_options", 00:18:23.073 "params": { 00:18:23.073 "bdev_io_pool_size": 65535, 00:18:23.073 "bdev_io_cache_size": 256, 00:18:23.073 "bdev_auto_examine": true, 00:18:23.073 "iobuf_small_cache_size": 128, 00:18:23.073 "iobuf_large_cache_size": 16 00:18:23.073 } 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "method": "bdev_raid_set_options", 00:18:23.073 "params": { 00:18:23.073 "process_window_size_kb": 1024, 00:18:23.073 "process_max_bandwidth_mb_sec": 0 00:18:23.073 } 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "method": "bdev_iscsi_set_options", 00:18:23.073 "params": { 00:18:23.073 "timeout_sec": 30 00:18:23.073 } 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "method": "bdev_nvme_set_options", 00:18:23.073 "params": { 00:18:23.073 "action_on_timeout": "none", 00:18:23.073 "timeout_us": 0, 00:18:23.073 "timeout_admin_us": 0, 00:18:23.073 "keep_alive_timeout_ms": 10000, 00:18:23.073 "arbitration_burst": 0, 00:18:23.073 "low_priority_weight": 0, 00:18:23.073 "medium_priority_weight": 0, 00:18:23.073 "high_priority_weight": 0, 00:18:23.073 "nvme_adminq_poll_period_us": 10000, 00:18:23.073 "nvme_ioq_poll_period_us": 0, 00:18:23.073 "io_queue_requests": 0, 00:18:23.073 "delay_cmd_submit": true, 00:18:23.073 "transport_retry_count": 4, 00:18:23.073 "bdev_retry_count": 3, 00:18:23.073 "transport_ack_timeout": 0, 00:18:23.073 "ctrlr_loss_timeout_sec": 0, 00:18:23.073 "reconnect_delay_sec": 0, 00:18:23.073 "fast_io_fail_timeout_sec": 0, 00:18:23.073 "disable_auto_failback": false, 00:18:23.073 "generate_uuids": false, 00:18:23.073 "transport_tos": 0, 00:18:23.073 "nvme_error_stat": false, 00:18:23.073 "rdma_srq_size": 0, 00:18:23.073 "io_path_stat": false, 00:18:23.073 "allow_accel_sequence": false, 00:18:23.073 "rdma_max_cq_size": 0, 00:18:23.073 
"rdma_cm_event_timeout_ms": 0, 00:18:23.073 "dhchap_digests": [ 00:18:23.073 "sha256", 00:18:23.073 "sha384", 00:18:23.073 "sha512" 00:18:23.073 ], 00:18:23.073 "dhchap_dhgroups": [ 00:18:23.073 "null", 00:18:23.073 "ffdhe2048", 00:18:23.073 "ffdhe3072", 00:18:23.073 "ffdhe4096", 00:18:23.073 "ffdhe6144", 00:18:23.073 "ffdhe8192" 00:18:23.073 ] 00:18:23.073 } 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "method": "bdev_nvme_set_hotplug", 00:18:23.073 "params": { 00:18:23.073 "period_us": 100000, 00:18:23.073 "enable": false 00:18:23.073 } 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "method": "bdev_malloc_create", 00:18:23.073 "params": { 00:18:23.073 "name": "malloc0", 00:18:23.073 "num_blocks": 8192, 00:18:23.073 "block_size": 4096, 00:18:23.073 "physical_block_size": 4096, 00:18:23.073 "uuid": "2878f5e9-200e-4439-8f7e-a11c7fee403b", 00:18:23.073 "optimal_io_boundary": 0, 00:18:23.073 "md_size": 0, 00:18:23.073 "dif_type": 0, 00:18:23.073 "dif_is_head_of_md": false, 00:18:23.073 "dif_pi_format": 0 00:18:23.073 } 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "method": "bdev_wait_for_examine" 00:18:23.073 } 00:18:23.073 ] 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "subsystem": "nbd", 00:18:23.073 "config": [] 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "subsystem": "scheduler", 00:18:23.073 "config": [ 00:18:23.073 { 00:18:23.073 "method": "framework_set_scheduler", 00:18:23.073 "params": { 00:18:23.073 "name": "static" 00:18:23.073 } 00:18:23.073 } 00:18:23.073 ] 00:18:23.073 }, 00:18:23.073 { 00:18:23.073 "subsystem": "nvmf", 00:18:23.073 "config": [ 00:18:23.073 { 00:18:23.073 "method": "nvmf_set_config", 00:18:23.073 "params": { 00:18:23.073 "discovery_filter": "match_any", 00:18:23.073 "admin_cmd_passthru": { 00:18:23.073 "identify_ctrlr": false 00:18:23.073 }, 00:18:23.073 "dhchap_digests": [ 00:18:23.073 "sha256", 00:18:23.073 "sha384", 00:18:23.073 "sha512" 00:18:23.073 ], 00:18:23.073 "dhchap_dhgroups": [ 00:18:23.073 "null", 00:18:23.073 "ffdhe2048", 00:18:23.073 "ffdhe3072", 00:18:23.073 "ffdhe4096", 00:18:23.073 "ffdhe6144", 00:18:23.073 "ffdhe8192" 00:18:23.073 ] 00:18:23.073 } 00:18:23.074 }, 00:18:23.074 { 00:18:23.074 "method": "nvmf_set_max_subsystems", 00:18:23.074 "params": { 00:18:23.074 "max_subsystems": 1024 00:18:23.074 } 00:18:23.074 }, 00:18:23.074 { 00:18:23.074 "method": "nvmf_set_crdt", 00:18:23.074 "params": { 00:18:23.074 "crdt1": 0, 00:18:23.074 "crdt2": 0, 00:18:23.074 "crdt3": 0 00:18:23.074 } 00:18:23.074 }, 00:18:23.074 { 00:18:23.074 "method": "nvmf_create_transport", 00:18:23.074 "params": { 00:18:23.074 "trtype": "TCP", 00:18:23.074 "max_queue_depth": 128, 00:18:23.074 "max_io_qpairs_per_ctrlr": 127, 00:18:23.074 "in_capsule_data_size": 4096, 00:18:23.074 "max_io_size": 131072, 00:18:23.074 "io_unit_size": 131072, 00:18:23.074 "max_aq_depth": 128, 00:18:23.074 "num_shared_buffers": 511, 00:18:23.074 "buf_cache_size": 4294967295, 00:18:23.074 "dif_insert_or_strip": false, 00:18:23.074 "zcopy": false, 00:18:23.074 "c2h_success": false, 00:18:23.074 "sock_priority": 0, 00:18:23.074 "abort_timeout_sec": 1, 00:18:23.074 "ack_timeout": 0, 00:18:23.074 "data_wr_pool_size": 0 00:18:23.074 } 00:18:23.074 }, 00:18:23.074 { 00:18:23.074 "method": "nvmf_create_subsystem", 00:18:23.074 "params": { 00:18:23.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.074 "allow_any_host": false, 00:18:23.074 "serial_number": "SPDK00000000000001", 00:18:23.074 "model_number": "SPDK bdev Controller", 00:18:23.074 "max_namespaces": 10, 00:18:23.074 "min_cntlid": 1, 00:18:23.074 
"max_cntlid": 65519, 00:18:23.074 "ana_reporting": false 00:18:23.074 } 00:18:23.074 }, 00:18:23.074 { 00:18:23.074 "method": "nvmf_subsystem_add_host", 00:18:23.074 "params": { 00:18:23.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.074 "host": "nqn.2016-06.io.spdk:host1", 00:18:23.074 "psk": "key0" 00:18:23.074 } 00:18:23.074 }, 00:18:23.074 { 00:18:23.074 "method": "nvmf_subsystem_add_ns", 00:18:23.074 "params": { 00:18:23.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.074 "namespace": { 00:18:23.074 "nsid": 1, 00:18:23.074 "bdev_name": "malloc0", 00:18:23.074 "nguid": "2878F5E9200E44398F7EA11C7FEE403B", 00:18:23.074 "uuid": "2878f5e9-200e-4439-8f7e-a11c7fee403b", 00:18:23.074 "no_auto_visible": false 00:18:23.074 } 00:18:23.074 } 00:18:23.074 }, 00:18:23.074 { 00:18:23.074 "method": "nvmf_subsystem_add_listener", 00:18:23.074 "params": { 00:18:23.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.074 "listen_address": { 00:18:23.074 "trtype": "TCP", 00:18:23.074 "adrfam": "IPv4", 00:18:23.074 "traddr": "10.0.0.2", 00:18:23.074 "trsvcid": "4420" 00:18:23.074 }, 00:18:23.074 "secure_channel": true 00:18:23.074 } 00:18:23.074 } 00:18:23.074 ] 00:18:23.074 } 00:18:23.074 ] 00:18:23.074 }' 00:18:23.074 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:23.334 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:23.334 "subsystems": [ 00:18:23.334 { 00:18:23.334 "subsystem": "keyring", 00:18:23.334 "config": [ 00:18:23.334 { 00:18:23.334 "method": "keyring_file_add_key", 00:18:23.334 "params": { 00:18:23.334 "name": "key0", 00:18:23.334 "path": "/tmp/tmp.KgemzkZaaV" 00:18:23.334 } 00:18:23.334 } 00:18:23.334 ] 00:18:23.334 }, 00:18:23.334 { 00:18:23.334 "subsystem": "iobuf", 00:18:23.334 "config": [ 00:18:23.334 { 00:18:23.334 "method": "iobuf_set_options", 00:18:23.334 "params": { 00:18:23.334 "small_pool_count": 8192, 00:18:23.334 "large_pool_count": 1024, 00:18:23.334 "small_bufsize": 8192, 00:18:23.334 "large_bufsize": 135168, 00:18:23.334 "enable_numa": false 00:18:23.334 } 00:18:23.334 } 00:18:23.334 ] 00:18:23.334 }, 00:18:23.334 { 00:18:23.334 "subsystem": "sock", 00:18:23.334 "config": [ 00:18:23.334 { 00:18:23.334 "method": "sock_set_default_impl", 00:18:23.334 "params": { 00:18:23.334 "impl_name": "posix" 00:18:23.334 } 00:18:23.334 }, 00:18:23.334 { 00:18:23.334 "method": "sock_impl_set_options", 00:18:23.334 "params": { 00:18:23.334 "impl_name": "ssl", 00:18:23.334 "recv_buf_size": 4096, 00:18:23.334 "send_buf_size": 4096, 00:18:23.334 "enable_recv_pipe": true, 00:18:23.334 "enable_quickack": false, 00:18:23.334 "enable_placement_id": 0, 00:18:23.334 "enable_zerocopy_send_server": true, 00:18:23.334 "enable_zerocopy_send_client": false, 00:18:23.334 "zerocopy_threshold": 0, 00:18:23.334 "tls_version": 0, 00:18:23.334 "enable_ktls": false 00:18:23.334 } 00:18:23.334 }, 00:18:23.334 { 00:18:23.334 "method": "sock_impl_set_options", 00:18:23.334 "params": { 00:18:23.334 "impl_name": "posix", 00:18:23.334 "recv_buf_size": 2097152, 00:18:23.334 "send_buf_size": 2097152, 00:18:23.334 "enable_recv_pipe": true, 00:18:23.334 "enable_quickack": false, 00:18:23.334 "enable_placement_id": 0, 00:18:23.334 "enable_zerocopy_send_server": true, 00:18:23.334 "enable_zerocopy_send_client": false, 00:18:23.334 "zerocopy_threshold": 0, 00:18:23.334 "tls_version": 0, 00:18:23.334 "enable_ktls": false 00:18:23.334 } 00:18:23.334 
} 00:18:23.334 ] 00:18:23.334 }, 00:18:23.334 { 00:18:23.334 "subsystem": "vmd", 00:18:23.334 "config": [] 00:18:23.334 }, 00:18:23.334 { 00:18:23.334 "subsystem": "accel", 00:18:23.334 "config": [ 00:18:23.334 { 00:18:23.334 "method": "accel_set_options", 00:18:23.334 "params": { 00:18:23.334 "small_cache_size": 128, 00:18:23.334 "large_cache_size": 16, 00:18:23.334 "task_count": 2048, 00:18:23.334 "sequence_count": 2048, 00:18:23.334 "buf_count": 2048 00:18:23.334 } 00:18:23.334 } 00:18:23.334 ] 00:18:23.334 }, 00:18:23.334 { 00:18:23.334 "subsystem": "bdev", 00:18:23.334 "config": [ 00:18:23.334 { 00:18:23.334 "method": "bdev_set_options", 00:18:23.334 "params": { 00:18:23.334 "bdev_io_pool_size": 65535, 00:18:23.334 "bdev_io_cache_size": 256, 00:18:23.334 "bdev_auto_examine": true, 00:18:23.334 "iobuf_small_cache_size": 128, 00:18:23.334 "iobuf_large_cache_size": 16 00:18:23.334 } 00:18:23.334 }, 00:18:23.334 { 00:18:23.334 "method": "bdev_raid_set_options", 00:18:23.334 "params": { 00:18:23.334 "process_window_size_kb": 1024, 00:18:23.334 "process_max_bandwidth_mb_sec": 0 00:18:23.334 } 00:18:23.334 }, 00:18:23.334 { 00:18:23.335 "method": "bdev_iscsi_set_options", 00:18:23.335 "params": { 00:18:23.335 "timeout_sec": 30 00:18:23.335 } 00:18:23.335 }, 00:18:23.335 { 00:18:23.335 "method": "bdev_nvme_set_options", 00:18:23.335 "params": { 00:18:23.335 "action_on_timeout": "none", 00:18:23.335 "timeout_us": 0, 00:18:23.335 "timeout_admin_us": 0, 00:18:23.335 "keep_alive_timeout_ms": 10000, 00:18:23.335 "arbitration_burst": 0, 00:18:23.335 "low_priority_weight": 0, 00:18:23.335 "medium_priority_weight": 0, 00:18:23.335 "high_priority_weight": 0, 00:18:23.335 "nvme_adminq_poll_period_us": 10000, 00:18:23.335 "nvme_ioq_poll_period_us": 0, 00:18:23.335 "io_queue_requests": 512, 00:18:23.335 "delay_cmd_submit": true, 00:18:23.335 "transport_retry_count": 4, 00:18:23.335 "bdev_retry_count": 3, 00:18:23.335 "transport_ack_timeout": 0, 00:18:23.335 "ctrlr_loss_timeout_sec": 0, 00:18:23.335 "reconnect_delay_sec": 0, 00:18:23.335 "fast_io_fail_timeout_sec": 0, 00:18:23.335 "disable_auto_failback": false, 00:18:23.335 "generate_uuids": false, 00:18:23.335 "transport_tos": 0, 00:18:23.335 "nvme_error_stat": false, 00:18:23.335 "rdma_srq_size": 0, 00:18:23.335 "io_path_stat": false, 00:18:23.335 "allow_accel_sequence": false, 00:18:23.335 "rdma_max_cq_size": 0, 00:18:23.335 "rdma_cm_event_timeout_ms": 0, 00:18:23.335 "dhchap_digests": [ 00:18:23.335 "sha256", 00:18:23.335 "sha384", 00:18:23.335 "sha512" 00:18:23.335 ], 00:18:23.335 "dhchap_dhgroups": [ 00:18:23.335 "null", 00:18:23.335 "ffdhe2048", 00:18:23.335 "ffdhe3072", 00:18:23.335 "ffdhe4096", 00:18:23.335 "ffdhe6144", 00:18:23.335 "ffdhe8192" 00:18:23.335 ] 00:18:23.335 } 00:18:23.335 }, 00:18:23.335 { 00:18:23.335 "method": "bdev_nvme_attach_controller", 00:18:23.335 "params": { 00:18:23.335 "name": "TLSTEST", 00:18:23.335 "trtype": "TCP", 00:18:23.335 "adrfam": "IPv4", 00:18:23.335 "traddr": "10.0.0.2", 00:18:23.335 "trsvcid": "4420", 00:18:23.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.335 "prchk_reftag": false, 00:18:23.335 "prchk_guard": false, 00:18:23.335 "ctrlr_loss_timeout_sec": 0, 00:18:23.335 "reconnect_delay_sec": 0, 00:18:23.335 "fast_io_fail_timeout_sec": 0, 00:18:23.335 "psk": "key0", 00:18:23.335 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:23.335 "hdgst": false, 00:18:23.335 "ddgst": false, 00:18:23.335 "multipath": "multipath" 00:18:23.335 } 00:18:23.335 }, 00:18:23.335 { 00:18:23.335 "method": 
"bdev_nvme_set_hotplug", 00:18:23.335 "params": { 00:18:23.335 "period_us": 100000, 00:18:23.335 "enable": false 00:18:23.335 } 00:18:23.335 }, 00:18:23.335 { 00:18:23.335 "method": "bdev_wait_for_examine" 00:18:23.335 } 00:18:23.335 ] 00:18:23.335 }, 00:18:23.335 { 00:18:23.335 "subsystem": "nbd", 00:18:23.335 "config": [] 00:18:23.335 } 00:18:23.335 ] 00:18:23.335 }' 00:18:23.335 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2704468 00:18:23.335 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2704468 ']' 00:18:23.335 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2704468 00:18:23.335 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:23.335 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:23.335 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2704468 00:18:23.335 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:23.335 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:23.335 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2704468' 00:18:23.335 killing process with pid 2704468 00:18:23.335 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2704468 00:18:23.335 Received shutdown signal, test time was about 10.000000 seconds 00:18:23.335 00:18:23.335 Latency(us) 00:18:23.335 [2024-11-07T09:46:51.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.335 [2024-11-07T09:46:51.006Z] =================================================================================================================== 00:18:23.335 [2024-11-07T09:46:51.006Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:23.335 10:46:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2704468 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2704211 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2704211 ']' 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2704211 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2704211 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2704211' 00:18:23.594 killing process with pid 2704211 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2704211 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2704211 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:23.594 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:23.594 "subsystems": [ 00:18:23.594 { 00:18:23.594 "subsystem": "keyring", 00:18:23.594 "config": [ 00:18:23.594 { 00:18:23.594 "method": "keyring_file_add_key", 00:18:23.594 "params": { 00:18:23.594 "name": "key0", 00:18:23.594 "path": "/tmp/tmp.KgemzkZaaV" 00:18:23.594 } 00:18:23.594 } 00:18:23.594 ] 00:18:23.594 }, 00:18:23.594 { 00:18:23.594 "subsystem": "iobuf", 00:18:23.594 "config": [ 00:18:23.594 { 00:18:23.594 "method": "iobuf_set_options", 00:18:23.594 "params": { 00:18:23.594 "small_pool_count": 8192, 00:18:23.594 "large_pool_count": 1024, 00:18:23.594 "small_bufsize": 8192, 00:18:23.594 "large_bufsize": 135168, 00:18:23.594 "enable_numa": false 00:18:23.594 } 00:18:23.594 } 00:18:23.594 ] 00:18:23.594 }, 00:18:23.594 { 00:18:23.594 "subsystem": "sock", 00:18:23.594 "config": [ 00:18:23.594 { 00:18:23.594 "method": "sock_set_default_impl", 00:18:23.594 "params": { 00:18:23.594 "impl_name": "posix" 00:18:23.594 } 00:18:23.594 }, 00:18:23.594 { 00:18:23.594 "method": "sock_impl_set_options", 00:18:23.594 "params": { 00:18:23.594 "impl_name": "ssl", 00:18:23.594 "recv_buf_size": 4096, 00:18:23.594 "send_buf_size": 4096, 00:18:23.594 "enable_recv_pipe": true, 00:18:23.594 "enable_quickack": false, 00:18:23.594 "enable_placement_id": 0, 00:18:23.594 "enable_zerocopy_send_server": true, 00:18:23.594 "enable_zerocopy_send_client": false, 00:18:23.594 "zerocopy_threshold": 0, 00:18:23.594 "tls_version": 0, 00:18:23.594 "enable_ktls": false 00:18:23.594 } 00:18:23.594 }, 00:18:23.594 { 00:18:23.594 "method": "sock_impl_set_options", 00:18:23.594 "params": { 00:18:23.594 "impl_name": "posix", 00:18:23.594 "recv_buf_size": 2097152, 00:18:23.594 "send_buf_size": 2097152, 00:18:23.594 "enable_recv_pipe": true, 00:18:23.594 "enable_quickack": false, 00:18:23.594 "enable_placement_id": 0, 00:18:23.594 "enable_zerocopy_send_server": true, 00:18:23.594 "enable_zerocopy_send_client": false, 00:18:23.594 "zerocopy_threshold": 0, 00:18:23.594 "tls_version": 0, 00:18:23.594 "enable_ktls": false 00:18:23.595 } 00:18:23.595 } 00:18:23.595 ] 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "subsystem": "vmd", 00:18:23.595 "config": [] 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "subsystem": "accel", 00:18:23.595 "config": [ 00:18:23.595 { 00:18:23.595 "method": "accel_set_options", 00:18:23.595 "params": { 00:18:23.595 "small_cache_size": 128, 00:18:23.595 "large_cache_size": 16, 00:18:23.595 "task_count": 2048, 00:18:23.595 "sequence_count": 2048, 00:18:23.595 "buf_count": 2048 00:18:23.595 } 00:18:23.595 } 00:18:23.595 ] 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "subsystem": "bdev", 00:18:23.595 "config": [ 00:18:23.595 { 00:18:23.595 "method": "bdev_set_options", 00:18:23.595 "params": { 00:18:23.595 "bdev_io_pool_size": 65535, 00:18:23.595 "bdev_io_cache_size": 256, 00:18:23.595 "bdev_auto_examine": true, 00:18:23.595 "iobuf_small_cache_size": 128, 00:18:23.595 "iobuf_large_cache_size": 16 00:18:23.595 } 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "method": "bdev_raid_set_options", 00:18:23.595 "params": { 00:18:23.595 "process_window_size_kb": 1024, 00:18:23.595 "process_max_bandwidth_mb_sec": 0 00:18:23.595 } 00:18:23.595 }, 
00:18:23.595 { 00:18:23.595 "method": "bdev_iscsi_set_options", 00:18:23.595 "params": { 00:18:23.595 "timeout_sec": 30 00:18:23.595 } 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "method": "bdev_nvme_set_options", 00:18:23.595 "params": { 00:18:23.595 "action_on_timeout": "none", 00:18:23.595 "timeout_us": 0, 00:18:23.595 "timeout_admin_us": 0, 00:18:23.595 "keep_alive_timeout_ms": 10000, 00:18:23.595 "arbitration_burst": 0, 00:18:23.595 "low_priority_weight": 0, 00:18:23.595 "medium_priority_weight": 0, 00:18:23.595 "high_priority_weight": 0, 00:18:23.595 "nvme_adminq_poll_period_us": 10000, 00:18:23.595 "nvme_ioq_poll_period_us": 0, 00:18:23.595 "io_queue_requests": 0, 00:18:23.595 "delay_cmd_submit": true, 00:18:23.595 "transport_retry_count": 4, 00:18:23.595 "bdev_retry_count": 3, 00:18:23.595 "transport_ack_timeout": 0, 00:18:23.595 "ctrlr_loss_timeout_sec": 0, 00:18:23.595 "reconnect_delay_sec": 0, 00:18:23.595 "fast_io_fail_timeout_sec": 0, 00:18:23.595 "disable_auto_failback": false, 00:18:23.595 "generate_uuids": false, 00:18:23.595 "transport_tos": 0, 00:18:23.595 "nvme_error_stat": false, 00:18:23.595 "rdma_srq_size": 0, 00:18:23.595 "io_path_stat": false, 00:18:23.595 "allow_accel_sequence": false, 00:18:23.595 "rdma_max_cq_size": 0, 00:18:23.595 "rdma_cm_event_timeout_ms": 0, 00:18:23.595 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.595 "dhchap_digests": [ 00:18:23.595 "sha256", 00:18:23.595 "sha384", 00:18:23.595 "sha512" 00:18:23.595 ], 00:18:23.595 "dhchap_dhgroups": [ 00:18:23.595 "null", 00:18:23.595 "ffdhe2048", 00:18:23.595 "ffdhe3072", 00:18:23.595 "ffdhe4096", 00:18:23.595 "ffdhe6144", 00:18:23.595 "ffdhe8192" 00:18:23.595 ] 00:18:23.595 } 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "method": "bdev_nvme_set_hotplug", 00:18:23.595 "params": { 00:18:23.595 "period_us": 100000, 00:18:23.595 "enable": false 00:18:23.595 } 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "method": "bdev_malloc_create", 00:18:23.595 "params": { 00:18:23.595 "name": "malloc0", 00:18:23.595 "num_blocks": 8192, 00:18:23.595 "block_size": 4096, 00:18:23.595 "physical_block_size": 4096, 00:18:23.595 "uuid": "2878f5e9-200e-4439-8f7e-a11c7fee403b", 00:18:23.595 "optimal_io_boundary": 0, 00:18:23.595 "md_size": 0, 00:18:23.595 "dif_type": 0, 00:18:23.595 "dif_is_head_of_md": false, 00:18:23.595 "dif_pi_format": 0 00:18:23.595 } 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "method": "bdev_wait_for_examine" 00:18:23.595 } 00:18:23.595 ] 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "subsystem": "nbd", 00:18:23.595 "config": [] 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "subsystem": "scheduler", 00:18:23.595 "config": [ 00:18:23.595 { 00:18:23.595 "method": "framework_set_scheduler", 00:18:23.595 "params": { 00:18:23.595 "name": "static" 00:18:23.595 } 00:18:23.595 } 00:18:23.595 ] 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "subsystem": "nvmf", 00:18:23.595 "config": [ 00:18:23.595 { 00:18:23.595 "method": "nvmf_set_config", 00:18:23.595 "params": { 00:18:23.595 "discovery_filter": "match_any", 00:18:23.595 "admin_cmd_passthru": { 00:18:23.595 "identify_ctrlr": false 00:18:23.595 }, 00:18:23.595 "dhchap_digests": [ 00:18:23.595 "sha256", 00:18:23.595 "sha384", 00:18:23.595 "sha512" 00:18:23.595 ], 00:18:23.595 "dhchap_dhgroups": [ 00:18:23.595 "null", 00:18:23.595 "ffdhe2048", 00:18:23.595 "ffdhe3072", 00:18:23.595 "ffdhe4096", 00:18:23.595 "ffdhe6144", 00:18:23.595 "ffdhe8192" 00:18:23.595 ] 00:18:23.595 } 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 
"method": "nvmf_set_max_subsystems", 00:18:23.595 "params": { 00:18:23.595 "max_subsystems": 1024 00:18:23.595 } 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "method": "nvmf_set_crdt", 00:18:23.595 "params": { 00:18:23.595 "crdt1": 0, 00:18:23.595 "crdt2": 0, 00:18:23.595 "crdt3": 0 00:18:23.595 } 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "method": "nvmf_create_transport", 00:18:23.595 "params": { 00:18:23.595 "trtype": "TCP", 00:18:23.595 "max_queue_depth": 128, 00:18:23.595 "max_io_qpairs_per_ctrlr": 127, 00:18:23.595 "in_capsule_data_size": 4096, 00:18:23.595 "max_io_size": 131072, 00:18:23.595 "io_unit_size": 131072, 00:18:23.595 "max_aq_depth": 128, 00:18:23.595 "num_shared_buffers": 511, 00:18:23.595 "buf_cache_size": 4294967295, 00:18:23.595 "dif_insert_or_strip": false, 00:18:23.595 "zcopy": false, 00:18:23.595 "c2h_success": false, 00:18:23.595 "sock_priority": 0, 00:18:23.595 "abort_timeout_sec": 1, 00:18:23.595 "ack_timeout": 0, 00:18:23.595 "data_wr_pool_size": 0 00:18:23.595 } 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "method": "nvmf_create_subsystem", 00:18:23.595 "params": { 00:18:23.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.595 "allow_any_host": false, 00:18:23.595 "serial_number": "SPDK00000000000001", 00:18:23.595 "model_number": "SPDK bdev Controller", 00:18:23.595 "max_namespaces": 10, 00:18:23.595 "min_cntlid": 1, 00:18:23.595 "max_cntlid": 65519, 00:18:23.595 "ana_reporting": false 00:18:23.595 } 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "method": "nvmf_subsystem_add_host", 00:18:23.595 "params": { 00:18:23.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.595 "host": "nqn.2016-06.io.spdk:host1", 00:18:23.595 "psk": "key0" 00:18:23.595 } 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "method": "nvmf_subsystem_add_ns", 00:18:23.595 "params": { 00:18:23.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.595 "namespace": { 00:18:23.595 "nsid": 1, 00:18:23.595 "bdev_name": "malloc0", 00:18:23.595 "nguid": "2878F5E9200E44398F7EA11C7FEE403B", 00:18:23.595 "uuid": "2878f5e9-200e-4439-8f7e-a11c7fee403b", 00:18:23.595 "no_auto_visible": false 00:18:23.595 } 00:18:23.595 } 00:18:23.595 }, 00:18:23.595 { 00:18:23.595 "method": "nvmf_subsystem_add_listener", 00:18:23.595 "params": { 00:18:23.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.595 "listen_address": { 00:18:23.595 "trtype": "TCP", 00:18:23.595 "adrfam": "IPv4", 00:18:23.595 "traddr": "10.0.0.2", 00:18:23.595 "trsvcid": "4420" 00:18:23.595 }, 00:18:23.595 "secure_channel": true 00:18:23.595 } 00:18:23.595 } 00:18:23.595 ] 00:18:23.596 } 00:18:23.596 ] 00:18:23.596 }' 00:18:23.596 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2704713 00:18:23.596 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:23.596 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2704713 00:18:23.596 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2704713 ']' 00:18:23.596 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.596 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:23.596 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:23.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.596 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:23.596 10:46:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.854 [2024-11-07 10:46:51.293489] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:23.854 [2024-11-07 10:46:51.293535] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.854 [2024-11-07 10:46:51.358789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.854 [2024-11-07 10:46:51.399951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.854 [2024-11-07 10:46:51.399988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.854 [2024-11-07 10:46:51.399997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.854 [2024-11-07 10:46:51.400003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.854 [2024-11-07 10:46:51.400009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:23.854 [2024-11-07 10:46:51.400610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.112 [2024-11-07 10:46:51.613650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.112 [2024-11-07 10:46:51.645665] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:24.112 [2024-11-07 10:46:51.645890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.679 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:24.679 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:24.679 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:24.679 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:24.679 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.679 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.679 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:24.679 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2704958 00:18:24.679 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2704958 /var/tmp/bdevperf.sock 00:18:24.679 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2704958 ']' 00:18:24.679 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.679 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:24.679 "subsystems": [ 00:18:24.679 { 00:18:24.679 
"subsystem": "keyring", 00:18:24.679 "config": [ 00:18:24.679 { 00:18:24.679 "method": "keyring_file_add_key", 00:18:24.679 "params": { 00:18:24.679 "name": "key0", 00:18:24.679 "path": "/tmp/tmp.KgemzkZaaV" 00:18:24.679 } 00:18:24.679 } 00:18:24.679 ] 00:18:24.679 }, 00:18:24.679 { 00:18:24.679 "subsystem": "iobuf", 00:18:24.679 "config": [ 00:18:24.679 { 00:18:24.679 "method": "iobuf_set_options", 00:18:24.679 "params": { 00:18:24.679 "small_pool_count": 8192, 00:18:24.679 "large_pool_count": 1024, 00:18:24.679 "small_bufsize": 8192, 00:18:24.679 "large_bufsize": 135168, 00:18:24.679 "enable_numa": false 00:18:24.679 } 00:18:24.679 } 00:18:24.679 ] 00:18:24.679 }, 00:18:24.679 { 00:18:24.679 "subsystem": "sock", 00:18:24.679 "config": [ 00:18:24.679 { 00:18:24.679 "method": "sock_set_default_impl", 00:18:24.679 "params": { 00:18:24.679 "impl_name": "posix" 00:18:24.679 } 00:18:24.679 }, 00:18:24.679 { 00:18:24.679 "method": "sock_impl_set_options", 00:18:24.679 "params": { 00:18:24.679 "impl_name": "ssl", 00:18:24.679 "recv_buf_size": 4096, 00:18:24.679 "send_buf_size": 4096, 00:18:24.679 "enable_recv_pipe": true, 00:18:24.679 "enable_quickack": false, 00:18:24.679 "enable_placement_id": 0, 00:18:24.679 "enable_zerocopy_send_server": true, 00:18:24.679 "enable_zerocopy_send_client": false, 00:18:24.679 "zerocopy_threshold": 0, 00:18:24.679 "tls_version": 0, 00:18:24.679 "enable_ktls": false 00:18:24.679 } 00:18:24.679 }, 00:18:24.679 { 00:18:24.679 "method": "sock_impl_set_options", 00:18:24.679 "params": { 00:18:24.679 "impl_name": "posix", 00:18:24.679 "recv_buf_size": 2097152, 00:18:24.679 "send_buf_size": 2097152, 00:18:24.679 "enable_recv_pipe": true, 00:18:24.679 "enable_quickack": false, 00:18:24.679 "enable_placement_id": 0, 00:18:24.679 "enable_zerocopy_send_server": true, 00:18:24.679 "enable_zerocopy_send_client": false, 00:18:24.679 "zerocopy_threshold": 0, 00:18:24.679 "tls_version": 0, 00:18:24.679 "enable_ktls": false 00:18:24.679 } 00:18:24.679 } 00:18:24.679 ] 00:18:24.679 }, 00:18:24.679 { 00:18:24.679 "subsystem": "vmd", 00:18:24.679 "config": [] 00:18:24.679 }, 00:18:24.679 { 00:18:24.679 "subsystem": "accel", 00:18:24.679 "config": [ 00:18:24.679 { 00:18:24.679 "method": "accel_set_options", 00:18:24.679 "params": { 00:18:24.679 "small_cache_size": 128, 00:18:24.679 "large_cache_size": 16, 00:18:24.679 "task_count": 2048, 00:18:24.679 "sequence_count": 2048, 00:18:24.679 "buf_count": 2048 00:18:24.679 } 00:18:24.679 } 00:18:24.679 ] 00:18:24.679 }, 00:18:24.679 { 00:18:24.679 "subsystem": "bdev", 00:18:24.679 "config": [ 00:18:24.679 { 00:18:24.679 "method": "bdev_set_options", 00:18:24.679 "params": { 00:18:24.679 "bdev_io_pool_size": 65535, 00:18:24.679 "bdev_io_cache_size": 256, 00:18:24.679 "bdev_auto_examine": true, 00:18:24.679 "iobuf_small_cache_size": 128, 00:18:24.679 "iobuf_large_cache_size": 16 00:18:24.679 } 00:18:24.679 }, 00:18:24.679 { 00:18:24.679 "method": "bdev_raid_set_options", 00:18:24.679 "params": { 00:18:24.679 "process_window_size_kb": 1024, 00:18:24.679 "process_max_bandwidth_mb_sec": 0 00:18:24.679 } 00:18:24.679 }, 00:18:24.679 { 00:18:24.679 "method": "bdev_iscsi_set_options", 00:18:24.679 "params": { 00:18:24.679 "timeout_sec": 30 00:18:24.679 } 00:18:24.679 }, 00:18:24.679 { 00:18:24.679 "method": "bdev_nvme_set_options", 00:18:24.679 "params": { 00:18:24.679 "action_on_timeout": "none", 00:18:24.679 "timeout_us": 0, 00:18:24.679 "timeout_admin_us": 0, 00:18:24.679 "keep_alive_timeout_ms": 10000, 00:18:24.679 "arbitration_burst": 0, 
00:18:24.679 "low_priority_weight": 0, 00:18:24.679 "medium_priority_weight": 0, 00:18:24.679 "high_priority_weight": 0, 00:18:24.679 "nvme_adminq_poll_period_us": 10000, 00:18:24.679 "nvme_ioq_poll_period_us": 0, 00:18:24.679 "io_queue_requests": 512, 00:18:24.679 "delay_cmd_submit": true, 00:18:24.679 "transport_retry_count": 4, 00:18:24.679 "bdev_retry_count": 3, 00:18:24.679 "transport_ack_timeout": 0, 00:18:24.679 "ctrlr_loss_timeout_sec": 0, 00:18:24.679 "reconnect_delay_sec": 0, 00:18:24.679 "fast_io_fail_timeout_sec": 0, 00:18:24.679 "disable_auto_failback": false, 00:18:24.679 "generate_uuids": false, 00:18:24.679 "transport_tos": 0, 00:18:24.679 "nvme_error_stat": false, 00:18:24.679 "rdma_srq_size": 0, 00:18:24.679 "io_path_stat": false, 00:18:24.679 "allow_accel_sequence": false, 00:18:24.679 "rdma_max_cq_size": 0, 00:18:24.679 "rdma_cm_event_timeout_ms": 0, 00:18:24.679 "dhchap_digests": [ 00:18:24.679 "sha256", 00:18:24.680 "sha384", 00:18:24.680 "sha512" 00:18:24.680 ], 00:18:24.680 "dhchap_dhgroups": [ 00:18:24.680 "null", 00:18:24.680 "ffdhe2048", 00:18:24.680 "ffdhe3072", 00:18:24.680 "ffdhe4096", 00:18:24.680 "ffdhe6144", 00:18:24.680 "ffdhe8192" 00:18:24.680 ] 00:18:24.680 } 00:18:24.680 }, 00:18:24.680 { 00:18:24.680 "method": "bdev_nvme_attach_controller", 00:18:24.680 "params": { 00:18:24.680 "name": "TLSTEST", 00:18:24.680 "trtype": "TCP", 00:18:24.680 "adrfam": "IPv4", 00:18:24.680 "traddr": "10.0.0.2", 00:18:24.680 "trsvcid": "4420", 00:18:24.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.680 "prchk_reftag": false, 00:18:24.680 "prchk_guard": false, 00:18:24.680 "ctrlr_loss_timeout_sec": 0, 00:18:24.680 "reconnect_delay_sec": 0, 00:18:24.680 "fast_io_fail_timeout_sec": 0, 00:18:24.680 "psk": "key0", 00:18:24.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.680 "hdgst": false, 00:18:24.680 "ddgst": false, 00:18:24.680 "multipath": "multipath" 00:18:24.680 } 00:18:24.680 }, 00:18:24.680 { 00:18:24.680 "method": "bdev_nvme_set_hotplug", 00:18:24.680 "params": { 00:18:24.680 "period_us": 100000, 00:18:24.680 "enable": false 00:18:24.680 } 00:18:24.680 }, 00:18:24.680 { 00:18:24.680 "method": "bdev_wait_for_examine" 00:18:24.680 } 00:18:24.680 ] 00:18:24.680 }, 00:18:24.680 { 00:18:24.680 "subsystem": "nbd", 00:18:24.680 "config": [] 00:18:24.680 } 00:18:24.680 ] 00:18:24.680 }' 00:18:24.680 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:24.680 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.680 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:24.680 10:46:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.680 [2024-11-07 10:46:52.175562] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:18:24.680 [2024-11-07 10:46:52.175609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2704958 ] 00:18:24.680 [2024-11-07 10:46:52.234295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.680 [2024-11-07 10:46:52.274624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.938 [2024-11-07 10:46:52.427362] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.504 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:25.504 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:25.504 10:46:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:25.504 Running I/O for 10 seconds... 00:18:27.814 5235.00 IOPS, 20.45 MiB/s [2024-11-07T09:46:56.419Z] 5357.50 IOPS, 20.93 MiB/s [2024-11-07T09:46:57.353Z] 5319.33 IOPS, 20.78 MiB/s [2024-11-07T09:46:58.287Z] 5390.00 IOPS, 21.05 MiB/s [2024-11-07T09:46:59.220Z] 5404.60 IOPS, 21.11 MiB/s [2024-11-07T09:47:00.158Z] 5435.33 IOPS, 21.23 MiB/s [2024-11-07T09:47:01.531Z] 5445.86 IOPS, 21.27 MiB/s [2024-11-07T09:47:02.465Z] 5457.88 IOPS, 21.32 MiB/s [2024-11-07T09:47:03.398Z] 5453.78 IOPS, 21.30 MiB/s [2024-11-07T09:47:03.398Z] 5439.80 IOPS, 21.25 MiB/s 00:18:35.727 Latency(us) 00:18:35.727 [2024-11-07T09:47:03.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.727 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:35.727 Verification LBA range: start 0x0 length 0x2000 00:18:35.727 TLSTESTn1 : 10.01 5444.51 21.27 0.00 0.00 23474.26 4815.47 31685.23 00:18:35.727 [2024-11-07T09:47:03.398Z] =================================================================================================================== 00:18:35.727 [2024-11-07T09:47:03.398Z] Total : 5444.51 21.27 0.00 0.00 23474.26 4815.47 31685.23 00:18:35.727 { 00:18:35.727 "results": [ 00:18:35.727 { 00:18:35.727 "job": "TLSTESTn1", 00:18:35.727 "core_mask": "0x4", 00:18:35.727 "workload": "verify", 00:18:35.727 "status": "finished", 00:18:35.727 "verify_range": { 00:18:35.727 "start": 0, 00:18:35.727 "length": 8192 00:18:35.728 }, 00:18:35.728 "queue_depth": 128, 00:18:35.728 "io_size": 4096, 00:18:35.728 "runtime": 10.014684, 00:18:35.728 "iops": 5444.505288434463, 00:18:35.728 "mibps": 21.267598782947122, 00:18:35.728 "io_failed": 0, 00:18:35.728 "io_timeout": 0, 00:18:35.728 "avg_latency_us": 23474.25783344696, 00:18:35.728 "min_latency_us": 4815.471304347826, 00:18:35.728 "max_latency_us": 31685.231304347824 00:18:35.728 } 00:18:35.728 ], 00:18:35.728 "core_count": 1 00:18:35.728 } 00:18:35.728 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:35.728 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2704958 00:18:35.728 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2704958 ']' 00:18:35.728 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2704958 00:18:35.728 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:18:35.728 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:35.728 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2704958 00:18:35.728 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:35.728 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:35.728 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2704958' 00:18:35.728 killing process with pid 2704958 00:18:35.728 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2704958 00:18:35.728 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.728 00:18:35.728 Latency(us) 00:18:35.728 [2024-11-07T09:47:03.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.728 [2024-11-07T09:47:03.399Z] =================================================================================================================== 00:18:35.728 [2024-11-07T09:47:03.399Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.728 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2704958 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2704713 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2704713 ']' 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2704713 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2704713 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2704713' 00:18:35.986 killing process with pid 2704713 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2704713 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2704713 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2706802 00:18:35.986 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:35.987 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2706802 
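(Here the harness restarts the NVMe-oF target, pid 2706802, inside the cvl_0_0_ns_spdk network namespace and then blocks in waitforlisten until the RPC socket answers. A rough sketch of that start-and-wait pattern, under the assumption that polling a cheap RPC such as rpc_get_methods is an acceptable readiness probe; the harness's own waitforlisten helper may check differently:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # wait for /var/tmp/spdk.sock to accept RPCs before configuring the target
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
)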
00:18:35.987 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2706802 ']' 00:18:35.987 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.987 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:35.987 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.987 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:35.987 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.245 [2024-11-07 10:47:03.679983] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:36.245 [2024-11-07 10:47:03.680024] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.245 [2024-11-07 10:47:03.746192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.245 [2024-11-07 10:47:03.784388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.245 [2024-11-07 10:47:03.784424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.245 [2024-11-07 10:47:03.784440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.245 [2024-11-07 10:47:03.784448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.245 [2024-11-07 10:47:03.784465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
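(The startup notices above already spell out how to grab a trace from this instance, since -e 0xFFFF enables all tracepoint groups: take a live snapshot with the spdk_trace tool, or keep the shared-memory trace file for offline decoding. Both commands below are taken directly from those notices; only the destination of the copy is an arbitrary choice:

  spdk_trace -s nvmf -i 0            # snapshot of events at runtime, as suggested above
  cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the trace file for offline analysis/debug
)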
00:18:36.245 [2024-11-07 10:47:03.785041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.245 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:36.245 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:36.245 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:36.245 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:36.245 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.503 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.503 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.KgemzkZaaV 00:18:36.503 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KgemzkZaaV 00:18:36.503 10:47:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:36.503 [2024-11-07 10:47:04.084428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.503 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:36.761 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:37.043 [2024-11-07 10:47:04.481466] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.043 [2024-11-07 10:47:04.481680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.043 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:37.043 malloc0 00:18:37.043 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:37.301 10:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KgemzkZaaV 00:18:37.559 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:37.817 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:37.817 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2707077 00:18:37.817 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.817 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2707077 /var/tmp/bdevperf.sock 00:18:37.817 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 2707077 ']' 00:18:37.817 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.817 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:37.818 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.818 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:37.818 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.818 [2024-11-07 10:47:05.289484] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:37.818 [2024-11-07 10:47:05.289532] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2707077 ] 00:18:37.818 [2024-11-07 10:47:05.352270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.818 [2024-11-07 10:47:05.393277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.818 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:37.818 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:37.818 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KgemzkZaaV 00:18:38.078 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:38.344 [2024-11-07 10:47:05.840844] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.344 nvme0n1 00:18:38.344 10:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:38.623 Running I/O for 1 seconds... 
00:18:39.628 5355.00 IOPS, 20.92 MiB/s 00:18:39.628 Latency(us) 00:18:39.628 [2024-11-07T09:47:07.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.628 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:39.628 Verification LBA range: start 0x0 length 0x2000 00:18:39.628 nvme0n1 : 1.03 5347.37 20.89 0.00 0.00 23699.91 6582.09 24960.67 00:18:39.628 [2024-11-07T09:47:07.299Z] =================================================================================================================== 00:18:39.628 [2024-11-07T09:47:07.299Z] Total : 5347.37 20.89 0.00 0.00 23699.91 6582.09 24960.67 00:18:39.628 { 00:18:39.628 "results": [ 00:18:39.628 { 00:18:39.628 "job": "nvme0n1", 00:18:39.628 "core_mask": "0x2", 00:18:39.628 "workload": "verify", 00:18:39.628 "status": "finished", 00:18:39.628 "verify_range": { 00:18:39.628 "start": 0, 00:18:39.628 "length": 8192 00:18:39.628 }, 00:18:39.628 "queue_depth": 128, 00:18:39.628 "io_size": 4096, 00:18:39.628 "runtime": 1.025363, 00:18:39.628 "iops": 5347.37453955331, 00:18:39.628 "mibps": 20.888181795130116, 00:18:39.628 "io_failed": 0, 00:18:39.628 "io_timeout": 0, 00:18:39.628 "avg_latency_us": 23699.90701504254, 00:18:39.628 "min_latency_us": 6582.093913043478, 00:18:39.628 "max_latency_us": 24960.667826086956 00:18:39.628 } 00:18:39.628 ], 00:18:39.628 "core_count": 1 00:18:39.628 } 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2707077 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2707077 ']' 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2707077 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2707077 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2707077' 00:18:39.628 killing process with pid 2707077 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2707077 00:18:39.628 Received shutdown signal, test time was about 1.000000 seconds 00:18:39.628 00:18:39.628 Latency(us) 00:18:39.628 [2024-11-07T09:47:07.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.628 [2024-11-07T09:47:07.299Z] =================================================================================================================== 00:18:39.628 [2024-11-07T09:47:07.299Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2707077 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2706802 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2706802 ']' 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2706802 00:18:39.628 10:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:39.628 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2706802 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2706802' 00:18:39.887 killing process with pid 2706802 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2706802 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2706802 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2707548 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2707548 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2707548 ']' 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:39.887 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.887 [2024-11-07 10:47:07.551847] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:39.888 [2024-11-07 10:47:07.551893] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.146 [2024-11-07 10:47:07.616925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.146 [2024-11-07 10:47:07.652839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.146 [2024-11-07 10:47:07.652880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:40.146 [2024-11-07 10:47:07.652888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.146 [2024-11-07 10:47:07.652894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.146 [2024-11-07 10:47:07.652899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.146 [2024-11-07 10:47:07.653465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.146 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:40.146 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:40.146 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:40.146 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:40.146 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.146 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.146 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:40.146 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.146 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.146 [2024-11-07 10:47:07.787949] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.146 malloc0 00:18:40.405 [2024-11-07 10:47:07.816183] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.405 [2024-11-07 10:47:07.816393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.405 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.405 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2707571 00:18:40.405 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2707571 /var/tmp/bdevperf.sock 00:18:40.405 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:40.405 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2707571 ']' 00:18:40.405 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.405 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:40.405 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.405 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:40.405 10:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.405 [2024-11-07 10:47:07.892977] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
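(The run that follows appears to configure the target through a batch of rpc_cmd calls at target/tls.sh@243, but the equivalent explicit sequence was traced earlier in this log at target/tls.sh@52-@59, around the 10:47:04 timestamps: create the TCP transport and the subsystem, add a TLS-enabled listener, back it with a malloc bdev, then register the PSK for the allowed host. Condensed here as a sketch using only commands and arguments that appear verbatim in that trace, issued against the default /var/tmp/spdk.sock:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KgemzkZaaV
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
)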
00:18:40.405 [2024-11-07 10:47:07.893022] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2707571 ] 00:18:40.405 [2024-11-07 10:47:07.955679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.405 [2024-11-07 10:47:07.998174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.662 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:40.662 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:40.662 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KgemzkZaaV 00:18:40.662 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:40.920 [2024-11-07 10:47:08.453360] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:40.920 nvme0n1 00:18:40.920 10:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:41.177 Running I/O for 1 seconds... 00:18:42.110 5166.00 IOPS, 20.18 MiB/s 00:18:42.110 Latency(us) 00:18:42.110 [2024-11-07T09:47:09.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.110 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:42.110 Verification LBA range: start 0x0 length 0x2000 00:18:42.110 nvme0n1 : 1.02 5202.29 20.32 0.00 0.00 24404.97 4786.98 59267.34 00:18:42.110 [2024-11-07T09:47:09.781Z] =================================================================================================================== 00:18:42.110 [2024-11-07T09:47:09.781Z] Total : 5202.29 20.32 0.00 0.00 24404.97 4786.98 59267.34 00:18:42.110 { 00:18:42.110 "results": [ 00:18:42.110 { 00:18:42.110 "job": "nvme0n1", 00:18:42.110 "core_mask": "0x2", 00:18:42.110 "workload": "verify", 00:18:42.110 "status": "finished", 00:18:42.110 "verify_range": { 00:18:42.110 "start": 0, 00:18:42.110 "length": 8192 00:18:42.110 }, 00:18:42.110 "queue_depth": 128, 00:18:42.110 "io_size": 4096, 00:18:42.110 "runtime": 1.017628, 00:18:42.110 "iops": 5202.293962037208, 00:18:42.110 "mibps": 20.321460789207844, 00:18:42.110 "io_failed": 0, 00:18:42.110 "io_timeout": 0, 00:18:42.110 "avg_latency_us": 24404.97462525254, 00:18:42.110 "min_latency_us": 4786.977391304348, 00:18:42.110 "max_latency_us": 59267.33913043478 00:18:42.110 } 00:18:42.110 ], 00:18:42.110 "core_count": 1 00:18:42.110 } 00:18:42.110 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:42.110 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.110 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.388 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.388 10:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:42.388 "subsystems": [ 00:18:42.388 { 00:18:42.388 "subsystem": "keyring", 00:18:42.388 "config": [ 00:18:42.388 { 00:18:42.388 "method": "keyring_file_add_key", 00:18:42.388 "params": { 00:18:42.388 "name": "key0", 00:18:42.388 "path": "/tmp/tmp.KgemzkZaaV" 00:18:42.388 } 00:18:42.388 } 00:18:42.388 ] 00:18:42.388 }, 00:18:42.388 { 00:18:42.388 "subsystem": "iobuf", 00:18:42.388 "config": [ 00:18:42.388 { 00:18:42.388 "method": "iobuf_set_options", 00:18:42.388 "params": { 00:18:42.388 "small_pool_count": 8192, 00:18:42.388 "large_pool_count": 1024, 00:18:42.388 "small_bufsize": 8192, 00:18:42.388 "large_bufsize": 135168, 00:18:42.388 "enable_numa": false 00:18:42.388 } 00:18:42.388 } 00:18:42.388 ] 00:18:42.388 }, 00:18:42.388 { 00:18:42.388 "subsystem": "sock", 00:18:42.388 "config": [ 00:18:42.388 { 00:18:42.388 "method": "sock_set_default_impl", 00:18:42.388 "params": { 00:18:42.388 "impl_name": "posix" 00:18:42.388 } 00:18:42.388 }, 00:18:42.388 { 00:18:42.388 "method": "sock_impl_set_options", 00:18:42.388 "params": { 00:18:42.388 "impl_name": "ssl", 00:18:42.388 "recv_buf_size": 4096, 00:18:42.388 "send_buf_size": 4096, 00:18:42.388 "enable_recv_pipe": true, 00:18:42.388 "enable_quickack": false, 00:18:42.388 "enable_placement_id": 0, 00:18:42.388 "enable_zerocopy_send_server": true, 00:18:42.388 "enable_zerocopy_send_client": false, 00:18:42.388 "zerocopy_threshold": 0, 00:18:42.388 "tls_version": 0, 00:18:42.388 "enable_ktls": false 00:18:42.388 } 00:18:42.388 }, 00:18:42.388 { 00:18:42.388 "method": "sock_impl_set_options", 00:18:42.388 "params": { 00:18:42.388 "impl_name": "posix", 00:18:42.388 "recv_buf_size": 2097152, 00:18:42.388 "send_buf_size": 2097152, 00:18:42.388 "enable_recv_pipe": true, 00:18:42.388 "enable_quickack": false, 00:18:42.388 "enable_placement_id": 0, 00:18:42.388 "enable_zerocopy_send_server": true, 00:18:42.388 "enable_zerocopy_send_client": false, 00:18:42.388 "zerocopy_threshold": 0, 00:18:42.388 "tls_version": 0, 00:18:42.388 "enable_ktls": false 00:18:42.388 } 00:18:42.388 } 00:18:42.388 ] 00:18:42.388 }, 00:18:42.388 { 00:18:42.388 "subsystem": "vmd", 00:18:42.388 "config": [] 00:18:42.388 }, 00:18:42.388 { 00:18:42.388 "subsystem": "accel", 00:18:42.388 "config": [ 00:18:42.388 { 00:18:42.388 "method": "accel_set_options", 00:18:42.388 "params": { 00:18:42.388 "small_cache_size": 128, 00:18:42.388 "large_cache_size": 16, 00:18:42.388 "task_count": 2048, 00:18:42.388 "sequence_count": 2048, 00:18:42.388 "buf_count": 2048 00:18:42.388 } 00:18:42.388 } 00:18:42.388 ] 00:18:42.388 }, 00:18:42.388 { 00:18:42.388 "subsystem": "bdev", 00:18:42.388 "config": [ 00:18:42.388 { 00:18:42.388 "method": "bdev_set_options", 00:18:42.388 "params": { 00:18:42.388 "bdev_io_pool_size": 65535, 00:18:42.388 "bdev_io_cache_size": 256, 00:18:42.388 "bdev_auto_examine": true, 00:18:42.388 "iobuf_small_cache_size": 128, 00:18:42.388 "iobuf_large_cache_size": 16 00:18:42.388 } 00:18:42.388 }, 00:18:42.388 { 00:18:42.388 "method": "bdev_raid_set_options", 00:18:42.388 "params": { 00:18:42.388 "process_window_size_kb": 1024, 00:18:42.388 "process_max_bandwidth_mb_sec": 0 00:18:42.388 } 00:18:42.388 }, 00:18:42.388 { 00:18:42.388 "method": "bdev_iscsi_set_options", 00:18:42.388 "params": { 00:18:42.388 "timeout_sec": 30 00:18:42.388 } 00:18:42.388 }, 00:18:42.388 { 00:18:42.388 "method": "bdev_nvme_set_options", 00:18:42.388 "params": { 00:18:42.388 "action_on_timeout": "none", 00:18:42.388 
"timeout_us": 0, 00:18:42.388 "timeout_admin_us": 0, 00:18:42.388 "keep_alive_timeout_ms": 10000, 00:18:42.388 "arbitration_burst": 0, 00:18:42.388 "low_priority_weight": 0, 00:18:42.388 "medium_priority_weight": 0, 00:18:42.388 "high_priority_weight": 0, 00:18:42.388 "nvme_adminq_poll_period_us": 10000, 00:18:42.388 "nvme_ioq_poll_period_us": 0, 00:18:42.388 "io_queue_requests": 0, 00:18:42.388 "delay_cmd_submit": true, 00:18:42.388 "transport_retry_count": 4, 00:18:42.388 "bdev_retry_count": 3, 00:18:42.388 "transport_ack_timeout": 0, 00:18:42.388 "ctrlr_loss_timeout_sec": 0, 00:18:42.388 "reconnect_delay_sec": 0, 00:18:42.388 "fast_io_fail_timeout_sec": 0, 00:18:42.388 "disable_auto_failback": false, 00:18:42.388 "generate_uuids": false, 00:18:42.388 "transport_tos": 0, 00:18:42.388 "nvme_error_stat": false, 00:18:42.388 "rdma_srq_size": 0, 00:18:42.388 "io_path_stat": false, 00:18:42.388 "allow_accel_sequence": false, 00:18:42.388 "rdma_max_cq_size": 0, 00:18:42.388 "rdma_cm_event_timeout_ms": 0, 00:18:42.388 "dhchap_digests": [ 00:18:42.388 "sha256", 00:18:42.388 "sha384", 00:18:42.388 "sha512" 00:18:42.388 ], 00:18:42.388 "dhchap_dhgroups": [ 00:18:42.388 "null", 00:18:42.388 "ffdhe2048", 00:18:42.388 "ffdhe3072", 00:18:42.388 "ffdhe4096", 00:18:42.388 "ffdhe6144", 00:18:42.388 "ffdhe8192" 00:18:42.388 ] 00:18:42.388 } 00:18:42.388 }, 00:18:42.388 { 00:18:42.388 "method": "bdev_nvme_set_hotplug", 00:18:42.388 "params": { 00:18:42.388 "period_us": 100000, 00:18:42.388 "enable": false 00:18:42.388 } 00:18:42.388 }, 00:18:42.388 { 00:18:42.388 "method": "bdev_malloc_create", 00:18:42.388 "params": { 00:18:42.388 "name": "malloc0", 00:18:42.388 "num_blocks": 8192, 00:18:42.388 "block_size": 4096, 00:18:42.388 "physical_block_size": 4096, 00:18:42.388 "uuid": "f94514f1-f83a-4af1-87da-352ca5fe259b", 00:18:42.388 "optimal_io_boundary": 0, 00:18:42.388 "md_size": 0, 00:18:42.388 "dif_type": 0, 00:18:42.389 "dif_is_head_of_md": false, 00:18:42.389 "dif_pi_format": 0 00:18:42.389 } 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "method": "bdev_wait_for_examine" 00:18:42.389 } 00:18:42.389 ] 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "subsystem": "nbd", 00:18:42.389 "config": [] 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "subsystem": "scheduler", 00:18:42.389 "config": [ 00:18:42.389 { 00:18:42.389 "method": "framework_set_scheduler", 00:18:42.389 "params": { 00:18:42.389 "name": "static" 00:18:42.389 } 00:18:42.389 } 00:18:42.389 ] 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "subsystem": "nvmf", 00:18:42.389 "config": [ 00:18:42.389 { 00:18:42.389 "method": "nvmf_set_config", 00:18:42.389 "params": { 00:18:42.389 "discovery_filter": "match_any", 00:18:42.389 "admin_cmd_passthru": { 00:18:42.389 "identify_ctrlr": false 00:18:42.389 }, 00:18:42.389 "dhchap_digests": [ 00:18:42.389 "sha256", 00:18:42.389 "sha384", 00:18:42.389 "sha512" 00:18:42.389 ], 00:18:42.389 "dhchap_dhgroups": [ 00:18:42.389 "null", 00:18:42.389 "ffdhe2048", 00:18:42.389 "ffdhe3072", 00:18:42.389 "ffdhe4096", 00:18:42.389 "ffdhe6144", 00:18:42.389 "ffdhe8192" 00:18:42.389 ] 00:18:42.389 } 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "method": "nvmf_set_max_subsystems", 00:18:42.389 "params": { 00:18:42.389 "max_subsystems": 1024 00:18:42.389 } 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "method": "nvmf_set_crdt", 00:18:42.389 "params": { 00:18:42.389 "crdt1": 0, 00:18:42.389 "crdt2": 0, 00:18:42.389 "crdt3": 0 00:18:42.389 } 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "method": "nvmf_create_transport", 00:18:42.389 "params": 
{ 00:18:42.389 "trtype": "TCP", 00:18:42.389 "max_queue_depth": 128, 00:18:42.389 "max_io_qpairs_per_ctrlr": 127, 00:18:42.389 "in_capsule_data_size": 4096, 00:18:42.389 "max_io_size": 131072, 00:18:42.389 "io_unit_size": 131072, 00:18:42.389 "max_aq_depth": 128, 00:18:42.389 "num_shared_buffers": 511, 00:18:42.389 "buf_cache_size": 4294967295, 00:18:42.389 "dif_insert_or_strip": false, 00:18:42.389 "zcopy": false, 00:18:42.389 "c2h_success": false, 00:18:42.389 "sock_priority": 0, 00:18:42.389 "abort_timeout_sec": 1, 00:18:42.389 "ack_timeout": 0, 00:18:42.389 "data_wr_pool_size": 0 00:18:42.389 } 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "method": "nvmf_create_subsystem", 00:18:42.389 "params": { 00:18:42.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.389 "allow_any_host": false, 00:18:42.389 "serial_number": "00000000000000000000", 00:18:42.389 "model_number": "SPDK bdev Controller", 00:18:42.389 "max_namespaces": 32, 00:18:42.389 "min_cntlid": 1, 00:18:42.389 "max_cntlid": 65519, 00:18:42.389 "ana_reporting": false 00:18:42.389 } 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "method": "nvmf_subsystem_add_host", 00:18:42.389 "params": { 00:18:42.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.389 "host": "nqn.2016-06.io.spdk:host1", 00:18:42.389 "psk": "key0" 00:18:42.389 } 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "method": "nvmf_subsystem_add_ns", 00:18:42.389 "params": { 00:18:42.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.389 "namespace": { 00:18:42.389 "nsid": 1, 00:18:42.389 "bdev_name": "malloc0", 00:18:42.389 "nguid": "F94514F1F83A4AF187DA352CA5FE259B", 00:18:42.389 "uuid": "f94514f1-f83a-4af1-87da-352ca5fe259b", 00:18:42.389 "no_auto_visible": false 00:18:42.389 } 00:18:42.389 } 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "method": "nvmf_subsystem_add_listener", 00:18:42.389 "params": { 00:18:42.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.389 "listen_address": { 00:18:42.389 "trtype": "TCP", 00:18:42.389 "adrfam": "IPv4", 00:18:42.389 "traddr": "10.0.0.2", 00:18:42.389 "trsvcid": "4420" 00:18:42.389 }, 00:18:42.389 "secure_channel": false, 00:18:42.389 "sock_impl": "ssl" 00:18:42.389 } 00:18:42.389 } 00:18:42.389 ] 00:18:42.389 } 00:18:42.389 ] 00:18:42.389 }' 00:18:42.389 10:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:42.389 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:42.389 "subsystems": [ 00:18:42.389 { 00:18:42.389 "subsystem": "keyring", 00:18:42.389 "config": [ 00:18:42.389 { 00:18:42.389 "method": "keyring_file_add_key", 00:18:42.389 "params": { 00:18:42.389 "name": "key0", 00:18:42.389 "path": "/tmp/tmp.KgemzkZaaV" 00:18:42.389 } 00:18:42.389 } 00:18:42.389 ] 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "subsystem": "iobuf", 00:18:42.389 "config": [ 00:18:42.389 { 00:18:42.389 "method": "iobuf_set_options", 00:18:42.389 "params": { 00:18:42.389 "small_pool_count": 8192, 00:18:42.389 "large_pool_count": 1024, 00:18:42.389 "small_bufsize": 8192, 00:18:42.389 "large_bufsize": 135168, 00:18:42.389 "enable_numa": false 00:18:42.389 } 00:18:42.389 } 00:18:42.389 ] 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "subsystem": "sock", 00:18:42.389 "config": [ 00:18:42.389 { 00:18:42.389 "method": "sock_set_default_impl", 00:18:42.389 "params": { 00:18:42.389 "impl_name": "posix" 00:18:42.389 } 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "method": "sock_impl_set_options", 00:18:42.389 
"params": { 00:18:42.389 "impl_name": "ssl", 00:18:42.389 "recv_buf_size": 4096, 00:18:42.389 "send_buf_size": 4096, 00:18:42.389 "enable_recv_pipe": true, 00:18:42.389 "enable_quickack": false, 00:18:42.389 "enable_placement_id": 0, 00:18:42.389 "enable_zerocopy_send_server": true, 00:18:42.389 "enable_zerocopy_send_client": false, 00:18:42.389 "zerocopy_threshold": 0, 00:18:42.389 "tls_version": 0, 00:18:42.389 "enable_ktls": false 00:18:42.389 } 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "method": "sock_impl_set_options", 00:18:42.389 "params": { 00:18:42.389 "impl_name": "posix", 00:18:42.389 "recv_buf_size": 2097152, 00:18:42.389 "send_buf_size": 2097152, 00:18:42.389 "enable_recv_pipe": true, 00:18:42.389 "enable_quickack": false, 00:18:42.389 "enable_placement_id": 0, 00:18:42.389 "enable_zerocopy_send_server": true, 00:18:42.389 "enable_zerocopy_send_client": false, 00:18:42.389 "zerocopy_threshold": 0, 00:18:42.389 "tls_version": 0, 00:18:42.389 "enable_ktls": false 00:18:42.389 } 00:18:42.389 } 00:18:42.389 ] 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "subsystem": "vmd", 00:18:42.389 "config": [] 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "subsystem": "accel", 00:18:42.389 "config": [ 00:18:42.389 { 00:18:42.389 "method": "accel_set_options", 00:18:42.389 "params": { 00:18:42.389 "small_cache_size": 128, 00:18:42.389 "large_cache_size": 16, 00:18:42.389 "task_count": 2048, 00:18:42.389 "sequence_count": 2048, 00:18:42.389 "buf_count": 2048 00:18:42.389 } 00:18:42.389 } 00:18:42.389 ] 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "subsystem": "bdev", 00:18:42.389 "config": [ 00:18:42.389 { 00:18:42.389 "method": "bdev_set_options", 00:18:42.389 "params": { 00:18:42.389 "bdev_io_pool_size": 65535, 00:18:42.389 "bdev_io_cache_size": 256, 00:18:42.389 "bdev_auto_examine": true, 00:18:42.389 "iobuf_small_cache_size": 128, 00:18:42.389 "iobuf_large_cache_size": 16 00:18:42.389 } 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "method": "bdev_raid_set_options", 00:18:42.389 "params": { 00:18:42.389 "process_window_size_kb": 1024, 00:18:42.389 "process_max_bandwidth_mb_sec": 0 00:18:42.389 } 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "method": "bdev_iscsi_set_options", 00:18:42.389 "params": { 00:18:42.389 "timeout_sec": 30 00:18:42.389 } 00:18:42.389 }, 00:18:42.389 { 00:18:42.389 "method": "bdev_nvme_set_options", 00:18:42.389 "params": { 00:18:42.389 "action_on_timeout": "none", 00:18:42.389 "timeout_us": 0, 00:18:42.389 "timeout_admin_us": 0, 00:18:42.389 "keep_alive_timeout_ms": 10000, 00:18:42.389 "arbitration_burst": 0, 00:18:42.389 "low_priority_weight": 0, 00:18:42.389 "medium_priority_weight": 0, 00:18:42.389 "high_priority_weight": 0, 00:18:42.389 "nvme_adminq_poll_period_us": 10000, 00:18:42.389 "nvme_ioq_poll_period_us": 0, 00:18:42.390 "io_queue_requests": 512, 00:18:42.390 "delay_cmd_submit": true, 00:18:42.390 "transport_retry_count": 4, 00:18:42.390 "bdev_retry_count": 3, 00:18:42.390 "transport_ack_timeout": 0, 00:18:42.390 "ctrlr_loss_timeout_sec": 0, 00:18:42.390 "reconnect_delay_sec": 0, 00:18:42.390 "fast_io_fail_timeout_sec": 0, 00:18:42.390 "disable_auto_failback": false, 00:18:42.390 "generate_uuids": false, 00:18:42.390 "transport_tos": 0, 00:18:42.390 "nvme_error_stat": false, 00:18:42.390 "rdma_srq_size": 0, 00:18:42.390 "io_path_stat": false, 00:18:42.390 "allow_accel_sequence": false, 00:18:42.390 "rdma_max_cq_size": 0, 00:18:42.390 "rdma_cm_event_timeout_ms": 0, 00:18:42.390 "dhchap_digests": [ 00:18:42.390 "sha256", 00:18:42.390 "sha384", 00:18:42.390 
"sha512" 00:18:42.390 ], 00:18:42.390 "dhchap_dhgroups": [ 00:18:42.390 "null", 00:18:42.390 "ffdhe2048", 00:18:42.390 "ffdhe3072", 00:18:42.390 "ffdhe4096", 00:18:42.390 "ffdhe6144", 00:18:42.390 "ffdhe8192" 00:18:42.390 ] 00:18:42.390 } 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "method": "bdev_nvme_attach_controller", 00:18:42.390 "params": { 00:18:42.390 "name": "nvme0", 00:18:42.390 "trtype": "TCP", 00:18:42.390 "adrfam": "IPv4", 00:18:42.390 "traddr": "10.0.0.2", 00:18:42.390 "trsvcid": "4420", 00:18:42.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.390 "prchk_reftag": false, 00:18:42.390 "prchk_guard": false, 00:18:42.390 "ctrlr_loss_timeout_sec": 0, 00:18:42.390 "reconnect_delay_sec": 0, 00:18:42.390 "fast_io_fail_timeout_sec": 0, 00:18:42.390 "psk": "key0", 00:18:42.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.390 "hdgst": false, 00:18:42.390 "ddgst": false, 00:18:42.390 "multipath": "multipath" 00:18:42.390 } 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "method": "bdev_nvme_set_hotplug", 00:18:42.390 "params": { 00:18:42.390 "period_us": 100000, 00:18:42.390 "enable": false 00:18:42.390 } 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "method": "bdev_enable_histogram", 00:18:42.390 "params": { 00:18:42.390 "name": "nvme0n1", 00:18:42.390 "enable": true 00:18:42.390 } 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "method": "bdev_wait_for_examine" 00:18:42.390 } 00:18:42.390 ] 00:18:42.390 }, 00:18:42.390 { 00:18:42.390 "subsystem": "nbd", 00:18:42.390 "config": [] 00:18:42.390 } 00:18:42.390 ] 00:18:42.390 }' 00:18:42.390 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2707571 00:18:42.390 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2707571 ']' 00:18:42.390 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2707571 00:18:42.390 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:42.390 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:42.390 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2707571 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2707571' 00:18:42.648 killing process with pid 2707571 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2707571 00:18:42.648 Received shutdown signal, test time was about 1.000000 seconds 00:18:42.648 00:18:42.648 Latency(us) 00:18:42.648 [2024-11-07T09:47:10.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.648 [2024-11-07T09:47:10.319Z] =================================================================================================================== 00:18:42.648 [2024-11-07T09:47:10.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2707571 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2707548 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2707548 
']' 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2707548 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2707548 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2707548' 00:18:42.648 killing process with pid 2707548 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2707548 00:18:42.648 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2707548 00:18:42.907 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:42.907 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:42.907 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:42.907 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:42.907 "subsystems": [ 00:18:42.907 { 00:18:42.907 "subsystem": "keyring", 00:18:42.907 "config": [ 00:18:42.907 { 00:18:42.907 "method": "keyring_file_add_key", 00:18:42.907 "params": { 00:18:42.907 "name": "key0", 00:18:42.907 "path": "/tmp/tmp.KgemzkZaaV" 00:18:42.907 } 00:18:42.907 } 00:18:42.907 ] 00:18:42.907 }, 00:18:42.907 { 00:18:42.907 "subsystem": "iobuf", 00:18:42.907 "config": [ 00:18:42.907 { 00:18:42.907 "method": "iobuf_set_options", 00:18:42.907 "params": { 00:18:42.907 "small_pool_count": 8192, 00:18:42.907 "large_pool_count": 1024, 00:18:42.907 "small_bufsize": 8192, 00:18:42.907 "large_bufsize": 135168, 00:18:42.907 "enable_numa": false 00:18:42.907 } 00:18:42.907 } 00:18:42.907 ] 00:18:42.907 }, 00:18:42.907 { 00:18:42.907 "subsystem": "sock", 00:18:42.907 "config": [ 00:18:42.907 { 00:18:42.907 "method": "sock_set_default_impl", 00:18:42.907 "params": { 00:18:42.907 "impl_name": "posix" 00:18:42.907 } 00:18:42.907 }, 00:18:42.907 { 00:18:42.907 "method": "sock_impl_set_options", 00:18:42.907 "params": { 00:18:42.907 "impl_name": "ssl", 00:18:42.907 "recv_buf_size": 4096, 00:18:42.907 "send_buf_size": 4096, 00:18:42.907 "enable_recv_pipe": true, 00:18:42.907 "enable_quickack": false, 00:18:42.907 "enable_placement_id": 0, 00:18:42.907 "enable_zerocopy_send_server": true, 00:18:42.907 "enable_zerocopy_send_client": false, 00:18:42.907 "zerocopy_threshold": 0, 00:18:42.907 "tls_version": 0, 00:18:42.907 "enable_ktls": false 00:18:42.907 } 00:18:42.907 }, 00:18:42.907 { 00:18:42.907 "method": "sock_impl_set_options", 00:18:42.907 "params": { 00:18:42.907 "impl_name": "posix", 00:18:42.907 "recv_buf_size": 2097152, 00:18:42.907 "send_buf_size": 2097152, 00:18:42.907 "enable_recv_pipe": true, 00:18:42.907 "enable_quickack": false, 00:18:42.907 "enable_placement_id": 0, 00:18:42.907 "enable_zerocopy_send_server": true, 00:18:42.907 "enable_zerocopy_send_client": false, 00:18:42.907 "zerocopy_threshold": 0, 00:18:42.907 "tls_version": 0, 00:18:42.907 "enable_ktls": 
false 00:18:42.907 } 00:18:42.907 } 00:18:42.907 ] 00:18:42.907 }, 00:18:42.907 { 00:18:42.907 "subsystem": "vmd", 00:18:42.907 "config": [] 00:18:42.907 }, 00:18:42.907 { 00:18:42.907 "subsystem": "accel", 00:18:42.907 "config": [ 00:18:42.907 { 00:18:42.907 "method": "accel_set_options", 00:18:42.907 "params": { 00:18:42.907 "small_cache_size": 128, 00:18:42.907 "large_cache_size": 16, 00:18:42.907 "task_count": 2048, 00:18:42.907 "sequence_count": 2048, 00:18:42.907 "buf_count": 2048 00:18:42.907 } 00:18:42.907 } 00:18:42.907 ] 00:18:42.907 }, 00:18:42.907 { 00:18:42.907 "subsystem": "bdev", 00:18:42.907 "config": [ 00:18:42.907 { 00:18:42.907 "method": "bdev_set_options", 00:18:42.907 "params": { 00:18:42.907 "bdev_io_pool_size": 65535, 00:18:42.907 "bdev_io_cache_size": 256, 00:18:42.907 "bdev_auto_examine": true, 00:18:42.907 "iobuf_small_cache_size": 128, 00:18:42.907 "iobuf_large_cache_size": 16 00:18:42.907 } 00:18:42.907 }, 00:18:42.907 { 00:18:42.907 "method": "bdev_raid_set_options", 00:18:42.908 "params": { 00:18:42.908 "process_window_size_kb": 1024, 00:18:42.908 "process_max_bandwidth_mb_sec": 0 00:18:42.908 } 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "method": "bdev_iscsi_set_options", 00:18:42.908 "params": { 00:18:42.908 "timeout_sec": 30 00:18:42.908 } 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "method": "bdev_nvme_set_options", 00:18:42.908 "params": { 00:18:42.908 "action_on_timeout": "none", 00:18:42.908 "timeout_us": 0, 00:18:42.908 "timeout_admin_us": 0, 00:18:42.908 "keep_alive_timeout_ms": 10000, 00:18:42.908 "arbitration_burst": 0, 00:18:42.908 "low_priority_weight": 0, 00:18:42.908 "medium_priority_weight": 0, 00:18:42.908 "high_priority_weight": 0, 00:18:42.908 "nvme_adminq_poll_period_us": 10000, 00:18:42.908 "nvme_ioq_poll_period_us": 0, 00:18:42.908 "io_queue_requests": 0, 00:18:42.908 "delay_cmd_submit": true, 00:18:42.908 "transport_retry_count": 4, 00:18:42.908 "bdev_retry_count": 3, 00:18:42.908 "transport_ack_timeout": 0, 00:18:42.908 "ctrlr_loss_timeout_sec": 0, 00:18:42.908 "reconnect_delay_sec": 0, 00:18:42.908 "fast_io_fail_timeout_sec": 0, 00:18:42.908 "disable_auto_failback": false, 00:18:42.908 "generate_uuids": false, 00:18:42.908 "transport_tos": 0, 00:18:42.908 "nvme_error_stat": false, 00:18:42.908 "rdma_srq_size": 0, 00:18:42.908 "io_path_stat": false, 00:18:42.908 "allow_accel_sequence": false, 00:18:42.908 "rdma_max_cq_size": 0, 00:18:42.908 "rdma_cm_event_timeout_ms": 0, 00:18:42.908 "dhchap_digests": [ 00:18:42.908 "sha256", 00:18:42.908 "sha384", 00:18:42.908 "sha512" 00:18:42.908 ], 00:18:42.908 "dhchap_dhgroups": [ 00:18:42.908 "null", 00:18:42.908 "ffdhe2048", 00:18:42.908 "ffdhe3072", 00:18:42.908 "ffdhe4096", 00:18:42.908 "ffdhe6144", 00:18:42.908 "ffdhe8192" 00:18:42.908 ] 00:18:42.908 } 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "method": "bdev_nvme_set_hotplug", 00:18:42.908 "params": { 00:18:42.908 "period_us": 100000, 00:18:42.908 "enable": false 00:18:42.908 } 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "method": "bdev_malloc_create", 00:18:42.908 "params": { 00:18:42.908 "name": "malloc0", 00:18:42.908 "num_blocks": 8192, 00:18:42.908 "block_size": 4096, 00:18:42.908 "physical_block_size": 4096, 00:18:42.908 "uuid": "f94514f1-f83a-4af1-87da-352ca5fe259b", 00:18:42.908 "optimal_io_boundary": 0, 00:18:42.908 "md_size": 0, 00:18:42.908 "dif_type": 0, 00:18:42.908 "dif_is_head_of_md": false, 00:18:42.908 "dif_pi_format": 0 00:18:42.908 } 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "method": "bdev_wait_for_examine" 
00:18:42.908 } 00:18:42.908 ] 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "subsystem": "nbd", 00:18:42.908 "config": [] 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "subsystem": "scheduler", 00:18:42.908 "config": [ 00:18:42.908 { 00:18:42.908 "method": "framework_set_scheduler", 00:18:42.908 "params": { 00:18:42.908 "name": "static" 00:18:42.908 } 00:18:42.908 } 00:18:42.908 ] 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "subsystem": "nvmf", 00:18:42.908 "config": [ 00:18:42.908 { 00:18:42.908 "method": "nvmf_set_config", 00:18:42.908 "params": { 00:18:42.908 "discovery_filter": "match_any", 00:18:42.908 "admin_cmd_passthru": { 00:18:42.908 "identify_ctrlr": false 00:18:42.908 }, 00:18:42.908 "dhchap_digests": [ 00:18:42.908 "sha256", 00:18:42.908 "sha384", 00:18:42.908 "sha512" 00:18:42.908 ], 00:18:42.908 "dhchap_dhgroups": [ 00:18:42.908 "null", 00:18:42.908 "ffdhe2048", 00:18:42.908 "ffdhe3072", 00:18:42.908 "ffdhe4096", 00:18:42.908 "ffdhe6144", 00:18:42.908 "ffdhe8192" 00:18:42.908 ] 00:18:42.908 } 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "method": "nvmf_set_max_subsystems", 00:18:42.908 "params": { 00:18:42.908 "max_subsystems": 1024 00:18:42.908 } 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "method": "nvmf_set_crdt", 00:18:42.908 "params": { 00:18:42.908 "crdt1": 0, 00:18:42.908 "crdt2": 0, 00:18:42.908 "crdt3": 0 00:18:42.908 } 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "method": "nvmf_create_transport", 00:18:42.908 "params": { 00:18:42.908 "trtype": "TCP", 00:18:42.908 "max_queue_depth": 128, 00:18:42.908 "max_io_qpairs_per_ctrlr": 127, 00:18:42.908 "in_capsule_data_size": 4096, 00:18:42.908 "max_io_size": 131072, 00:18:42.908 "io_unit_size": 131072, 00:18:42.908 "max_aq_depth": 128, 00:18:42.908 "num_shared_buffers": 511, 00:18:42.908 "buf_cache_size": 4294967295, 00:18:42.908 "dif_insert_or_strip": false, 00:18:42.908 "zcopy": false, 00:18:42.908 "c2h_success": false, 00:18:42.908 "sock_priority": 0, 00:18:42.908 "abort_timeout_sec": 1, 00:18:42.908 "ack_timeout": 0, 00:18:42.908 "data_wr_pool_size": 0 00:18:42.908 } 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "method": "nvmf_create_subsystem", 00:18:42.908 "params": { 00:18:42.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.908 "allow_any_host": false, 00:18:42.908 "serial_number": "00000000000000000000", 00:18:42.908 "model_number": "SPDK bdev Controller", 00:18:42.908 "max_namespaces": 32, 00:18:42.908 "min_cntlid": 1, 00:18:42.908 "max_cntlid": 65519, 00:18:42.908 "ana_reporting": false 00:18:42.908 } 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "method": "nvmf_subsystem_add_host", 00:18:42.908 "params": { 00:18:42.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.908 "host": "nqn.2016-06.io.spdk:host1", 00:18:42.908 "psk": "key0" 00:18:42.908 } 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "method": "nvmf_subsystem_add_ns", 00:18:42.908 "params": { 00:18:42.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.908 "namespace": { 00:18:42.908 "nsid": 1, 00:18:42.908 "bdev_name": "malloc0", 00:18:42.908 "nguid": "F94514F1F83A4AF187DA352CA5FE259B", 00:18:42.908 "uuid": "f94514f1-f83a-4af1-87da-352ca5fe259b", 00:18:42.908 "no_auto_visible": false 00:18:42.908 } 00:18:42.908 } 00:18:42.908 }, 00:18:42.908 { 00:18:42.908 "method": "nvmf_subsystem_add_listener", 00:18:42.908 "params": { 00:18:42.908 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.908 "listen_address": { 00:18:42.908 "trtype": "TCP", 00:18:42.908 "adrfam": "IPv4", 00:18:42.908 "traddr": "10.0.0.2", 00:18:42.908 "trsvcid": "4420" 00:18:42.908 }, 00:18:42.908 
"secure_channel": false, 00:18:42.908 "sock_impl": "ssl" 00:18:42.908 } 00:18:42.908 } 00:18:42.908 ] 00:18:42.908 } 00:18:42.908 ] 00:18:42.908 }' 00:18:42.908 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.908 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2708045 00:18:42.908 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:42.908 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2708045 00:18:42.908 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2708045 ']' 00:18:42.908 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.908 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:42.908 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.908 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:42.908 10:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.908 [2024-11-07 10:47:10.532835] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:42.908 [2024-11-07 10:47:10.532882] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.167 [2024-11-07 10:47:10.598292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.167 [2024-11-07 10:47:10.635644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.167 [2024-11-07 10:47:10.635681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.167 [2024-11-07 10:47:10.635689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.167 [2024-11-07 10:47:10.635697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.167 [2024-11-07 10:47:10.635702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:43.167 [2024-11-07 10:47:10.636303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.425 [2024-11-07 10:47:10.850365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.425 [2024-11-07 10:47:10.882376] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:43.425 [2024-11-07 10:47:10.882607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2708178 00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2708178 /var/tmp/bdevperf.sock 00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2708178 ']' 00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
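Both processes brought up above are driven purely from saved JSON rather than from individual RPC calls: the target configuration captured earlier with save_config (tgtcfg) is fed to the fresh nvmf_tgt as /dev/fd/62, and the bdevperf configuration saved over /var/tmp/bdevperf.sock (bperfcfg, including the keyring entry, the TLS-PSK controller attach and bdev_enable_histogram) is fed to the new bdevperf as /dev/fd/63. A condensed sketch of that save-and-replay pattern, assuming the /dev/fd arguments come from shell process substitution and with the workspace paths shortened:

  # capture both configurations as JSON (rpc_cmd in the trace ultimately drives scripts/rpc.py)
  tgtcfg=$(scripts/rpc.py save_config)
  bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  # relaunch the target inside the test netns from its saved config
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")
  # relaunch bdevperf idle (-z) from its saved config; it waits for perform_tests on the RPC socket
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")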
00:18:43.992 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:43.992 "subsystems": [ 00:18:43.992 { 00:18:43.992 "subsystem": "keyring", 00:18:43.992 "config": [ 00:18:43.992 { 00:18:43.992 "method": "keyring_file_add_key", 00:18:43.992 "params": { 00:18:43.992 "name": "key0", 00:18:43.992 "path": "/tmp/tmp.KgemzkZaaV" 00:18:43.992 } 00:18:43.992 } 00:18:43.992 ] 00:18:43.992 }, 00:18:43.992 { 00:18:43.992 "subsystem": "iobuf", 00:18:43.992 "config": [ 00:18:43.992 { 00:18:43.992 "method": "iobuf_set_options", 00:18:43.992 "params": { 00:18:43.992 "small_pool_count": 8192, 00:18:43.992 "large_pool_count": 1024, 00:18:43.992 "small_bufsize": 8192, 00:18:43.992 "large_bufsize": 135168, 00:18:43.992 "enable_numa": false 00:18:43.992 } 00:18:43.992 } 00:18:43.992 ] 00:18:43.992 }, 00:18:43.992 { 00:18:43.992 "subsystem": "sock", 00:18:43.992 "config": [ 00:18:43.992 { 00:18:43.992 "method": "sock_set_default_impl", 00:18:43.992 "params": { 00:18:43.992 "impl_name": "posix" 00:18:43.992 } 00:18:43.992 }, 00:18:43.992 { 00:18:43.992 "method": "sock_impl_set_options", 00:18:43.993 "params": { 00:18:43.993 "impl_name": "ssl", 00:18:43.993 "recv_buf_size": 4096, 00:18:43.993 "send_buf_size": 4096, 00:18:43.993 "enable_recv_pipe": true, 00:18:43.993 "enable_quickack": false, 00:18:43.993 "enable_placement_id": 0, 00:18:43.993 "enable_zerocopy_send_server": true, 00:18:43.993 "enable_zerocopy_send_client": false, 00:18:43.993 "zerocopy_threshold": 0, 00:18:43.993 "tls_version": 0, 00:18:43.993 "enable_ktls": false 00:18:43.993 } 00:18:43.993 }, 00:18:43.993 { 00:18:43.993 "method": "sock_impl_set_options", 00:18:43.993 "params": { 00:18:43.993 "impl_name": "posix", 00:18:43.993 "recv_buf_size": 2097152, 00:18:43.993 "send_buf_size": 2097152, 00:18:43.993 "enable_recv_pipe": true, 00:18:43.993 "enable_quickack": false, 00:18:43.993 "enable_placement_id": 0, 00:18:43.993 "enable_zerocopy_send_server": true, 00:18:43.993 "enable_zerocopy_send_client": false, 00:18:43.993 "zerocopy_threshold": 0, 00:18:43.993 "tls_version": 0, 00:18:43.993 "enable_ktls": false 00:18:43.993 } 00:18:43.993 } 00:18:43.993 ] 00:18:43.993 }, 00:18:43.993 { 00:18:43.993 "subsystem": "vmd", 00:18:43.993 "config": [] 00:18:43.993 }, 00:18:43.993 { 00:18:43.993 "subsystem": "accel", 00:18:43.993 "config": [ 00:18:43.993 { 00:18:43.993 "method": "accel_set_options", 00:18:43.993 "params": { 00:18:43.993 "small_cache_size": 128, 00:18:43.993 "large_cache_size": 16, 00:18:43.993 "task_count": 2048, 00:18:43.993 "sequence_count": 2048, 00:18:43.993 "buf_count": 2048 00:18:43.993 } 00:18:43.993 } 00:18:43.993 ] 00:18:43.993 }, 00:18:43.993 { 00:18:43.993 "subsystem": "bdev", 00:18:43.993 "config": [ 00:18:43.993 { 00:18:43.993 "method": "bdev_set_options", 00:18:43.993 "params": { 00:18:43.993 "bdev_io_pool_size": 65535, 00:18:43.993 "bdev_io_cache_size": 256, 00:18:43.993 "bdev_auto_examine": true, 00:18:43.993 "iobuf_small_cache_size": 128, 00:18:43.993 "iobuf_large_cache_size": 16 00:18:43.993 } 00:18:43.993 }, 00:18:43.993 { 00:18:43.993 "method": "bdev_raid_set_options", 00:18:43.993 "params": { 00:18:43.993 "process_window_size_kb": 1024, 00:18:43.993 "process_max_bandwidth_mb_sec": 0 00:18:43.993 } 00:18:43.993 }, 00:18:43.993 { 00:18:43.993 "method": "bdev_iscsi_set_options", 00:18:43.993 "params": { 00:18:43.993 "timeout_sec": 30 00:18:43.993 } 00:18:43.993 }, 00:18:43.993 { 00:18:43.993 "method": "bdev_nvme_set_options", 00:18:43.993 "params": { 00:18:43.993 "action_on_timeout": "none", 
00:18:43.993 "timeout_us": 0, 00:18:43.993 "timeout_admin_us": 0, 00:18:43.993 "keep_alive_timeout_ms": 10000, 00:18:43.993 "arbitration_burst": 0, 00:18:43.993 "low_priority_weight": 0, 00:18:43.993 "medium_priority_weight": 0, 00:18:43.993 "high_priority_weight": 0, 00:18:43.993 "nvme_adminq_poll_period_us": 10000, 00:18:43.993 "nvme_ioq_poll_period_us": 0, 00:18:43.993 "io_queue_requests": 512, 00:18:43.993 "delay_cmd_submit": true, 00:18:43.993 "transport_retry_count": 4, 00:18:43.993 "bdev_retry_count": 3, 00:18:43.993 "transport_ack_timeout": 0, 00:18:43.993 "ctrlr_loss_timeout_sec": 0, 00:18:43.993 "reconnect_delay_sec": 0, 00:18:43.993 "fast_io_fail_timeout_sec": 0, 00:18:43.993 "disable_auto_failback": false, 00:18:43.993 "generate_uuids": false, 00:18:43.993 "transport_tos": 0, 00:18:43.993 "nvme_error_stat": false, 00:18:43.993 "rdma_srq_size": 0, 00:18:43.993 "io_path_stat": false, 00:18:43.993 "allow_accel_sequence": false, 00:18:43.993 "rdma_max_cq_size": 0, 00:18:43.993 "rdma_cm_event_timeout_ms": 0, 00:18:43.993 "dhchap_digests": [ 00:18:43.993 "sha256", 00:18:43.993 "sha384", 00:18:43.993 "sha512" 00:18:43.993 ], 00:18:43.993 "dhchap_dhgroups": [ 00:18:43.993 "null", 00:18:43.993 "ffdhe2048", 00:18:43.993 "ffdhe3072", 00:18:43.993 "ffdhe4096", 00:18:43.993 "ffdhe6144", 00:18:43.993 "ffdhe8192" 00:18:43.993 ] 00:18:43.993 } 00:18:43.993 }, 00:18:43.993 { 00:18:43.993 "method": "bdev_nvme_attach_controller", 00:18:43.993 "params": { 00:18:43.993 "name": "nvme0", 00:18:43.993 "trtype": "TCP", 00:18:43.993 "adrfam": "IPv4", 00:18:43.993 "traddr": "10.0.0.2", 00:18:43.993 "trsvcid": "4420", 00:18:43.993 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.993 "prchk_reftag": false, 00:18:43.993 "prchk_guard": false, 00:18:43.993 "ctrlr_loss_timeout_sec": 0, 00:18:43.993 "reconnect_delay_sec": 0, 00:18:43.993 "fast_io_fail_timeout_sec": 0, 00:18:43.993 "psk": "key0", 00:18:43.993 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.993 "hdgst": false, 00:18:43.993 "ddgst": false, 00:18:43.993 "multipath": "multipath" 00:18:43.993 } 00:18:43.993 }, 00:18:43.993 { 00:18:43.993 "method": "bdev_nvme_set_hotplug", 00:18:43.993 "params": { 00:18:43.993 "period_us": 100000, 00:18:43.993 "enable": false 00:18:43.993 } 00:18:43.993 }, 00:18:43.993 { 00:18:43.993 "method": "bdev_enable_histogram", 00:18:43.993 "params": { 00:18:43.993 "name": "nvme0n1", 00:18:43.993 "enable": true 00:18:43.993 } 00:18:43.993 }, 00:18:43.993 { 00:18:43.993 "method": "bdev_wait_for_examine" 00:18:43.993 } 00:18:43.993 ] 00:18:43.993 }, 00:18:43.993 { 00:18:43.993 "subsystem": "nbd", 00:18:43.993 "config": [] 00:18:43.993 } 00:18:43.993 ] 00:18:43.993 }' 00:18:43.993 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:43.993 10:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.993 [2024-11-07 10:47:11.463855] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
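Before the new bdevperf instance runs I/O, the trace below confirms that the controller defined in that JSON actually came up, by listing controllers over the RPC socket and comparing the returned name against nvme0. A condensed sketch of that check (rpc.py path shortened, commands otherwise as in the trace):

  # expect the saved config to have produced a controller named nvme0
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'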
00:18:43.993 [2024-11-07 10:47:11.463904] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2708178 ] 00:18:43.993 [2024-11-07 10:47:11.526668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.993 [2024-11-07 10:47:11.569366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.251 [2024-11-07 10:47:11.721842] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.817 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:44.817 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:44.817 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:44.817 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:45.074 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.074 10:47:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:45.074 Running I/O for 1 seconds... 00:18:46.007 5345.00 IOPS, 20.88 MiB/s 00:18:46.007 Latency(us) 00:18:46.007 [2024-11-07T09:47:13.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.007 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:46.007 Verification LBA range: start 0x0 length 0x2000 00:18:46.007 nvme0n1 : 1.02 5390.27 21.06 0.00 0.00 23553.80 6582.09 28151.99 00:18:46.007 [2024-11-07T09:47:13.678Z] =================================================================================================================== 00:18:46.007 [2024-11-07T09:47:13.678Z] Total : 5390.27 21.06 0.00 0.00 23553.80 6582.09 28151.99 00:18:46.007 { 00:18:46.007 "results": [ 00:18:46.007 { 00:18:46.007 "job": "nvme0n1", 00:18:46.007 "core_mask": "0x2", 00:18:46.007 "workload": "verify", 00:18:46.007 "status": "finished", 00:18:46.007 "verify_range": { 00:18:46.007 "start": 0, 00:18:46.007 "length": 8192 00:18:46.007 }, 00:18:46.007 "queue_depth": 128, 00:18:46.007 "io_size": 4096, 00:18:46.008 "runtime": 1.015534, 00:18:46.008 "iops": 5390.267583360084, 00:18:46.008 "mibps": 21.05573274750033, 00:18:46.008 "io_failed": 0, 00:18:46.008 "io_timeout": 0, 00:18:46.008 "avg_latency_us": 23553.79518419088, 00:18:46.008 "min_latency_us": 6582.093913043478, 00:18:46.008 "max_latency_us": 28151.98608695652 00:18:46.008 } 00:18:46.008 ], 00:18:46.008 "core_count": 1 00:18:46.008 } 00:18:46.008 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:46.008 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:46.008 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:46.008 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:18:46.008 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:18:46.008 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = 
--pid ']' 00:18:46.008 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:46.008 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:18:46.008 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:18:46.008 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:18:46.008 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:46.008 nvmf_trace.0 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2708178 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2708178 ']' 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2708178 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2708178 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2708178' 00:18:46.266 killing process with pid 2708178 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2708178 00:18:46.266 Received shutdown signal, test time was about 1.000000 seconds 00:18:46.266 00:18:46.266 Latency(us) 00:18:46.266 [2024-11-07T09:47:13.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.266 [2024-11-07T09:47:13.937Z] =================================================================================================================== 00:18:46.266 [2024-11-07T09:47:13.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2708178 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:46.266 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:46.524 rmmod nvme_tcp 00:18:46.524 rmmod nvme_fabrics 00:18:46.524 rmmod nvme_keyring 00:18:46.524 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:46.524 10:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:46.524 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:46.524 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2708045 ']' 00:18:46.524 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2708045 00:18:46.524 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2708045 ']' 00:18:46.524 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2708045 00:18:46.524 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:46.524 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:46.524 10:47:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2708045 00:18:46.524 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:46.524 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:46.524 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2708045' 00:18:46.524 killing process with pid 2708045 00:18:46.524 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2708045 00:18:46.524 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2708045 00:18:46.782 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:46.782 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:46.782 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:46.782 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:46.782 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:46.782 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:46.782 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:46.782 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:46.782 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:46.782 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.782 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.782 10:47:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.683 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:48.683 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.QBgCjCH6QZ /tmp/tmp.INp14GmSFC /tmp/tmp.KgemzkZaaV 00:18:48.683 00:18:48.683 real 1m18.726s 00:18:48.683 user 2m0.268s 00:18:48.683 sys 0m30.429s 00:18:48.683 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:48.683 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.683 ************************************ 00:18:48.683 END TEST nvmf_tls 
00:18:48.683 ************************************ 00:18:48.683 10:47:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:48.683 10:47:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:48.683 10:47:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:48.683 10:47:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:48.942 ************************************ 00:18:48.942 START TEST nvmf_fips 00:18:48.942 ************************************ 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:48.942 * Looking for test storage... 00:18:48.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:48.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.942 --rc genhtml_branch_coverage=1 00:18:48.942 --rc genhtml_function_coverage=1 00:18:48.942 --rc genhtml_legend=1 00:18:48.942 --rc geninfo_all_blocks=1 00:18:48.942 --rc geninfo_unexecuted_blocks=1 00:18:48.942 00:18:48.942 ' 00:18:48.942 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:48.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.942 --rc genhtml_branch_coverage=1 00:18:48.942 --rc genhtml_function_coverage=1 00:18:48.943 --rc genhtml_legend=1 00:18:48.943 --rc geninfo_all_blocks=1 00:18:48.943 --rc geninfo_unexecuted_blocks=1 00:18:48.943 00:18:48.943 ' 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:48.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.943 --rc genhtml_branch_coverage=1 00:18:48.943 --rc genhtml_function_coverage=1 00:18:48.943 --rc genhtml_legend=1 00:18:48.943 --rc geninfo_all_blocks=1 00:18:48.943 --rc geninfo_unexecuted_blocks=1 00:18:48.943 00:18:48.943 ' 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:48.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.943 --rc genhtml_branch_coverage=1 00:18:48.943 --rc genhtml_function_coverage=1 00:18:48.943 --rc genhtml_legend=1 00:18:48.943 --rc geninfo_all_blocks=1 00:18:48.943 --rc geninfo_unexecuted_blocks=1 00:18:48.943 00:18:48.943 ' 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:48.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:48.943 10:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:48.943 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:48.944 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:48.944 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:48.944 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:48.944 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.944 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:48.944 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:48.944 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:48.944 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:18:49.202 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:49.203 Error setting digest 00:18:49.203 4002411A8A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:49.203 4002411A8A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:49.203 
10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:49.203 10:47:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.471 10:47:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:54.471 10:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:54.471 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:54.471 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:54.471 10:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:54.471 Found net devices under 0000:86:00.0: cvl_0_0 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.471 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:54.472 Found net devices under 0000:86:00.1: cvl_0_1 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:54.472 10:47:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:54.472 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:54.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:18:54.731 00:18:54.731 --- 10.0.0.2 ping statistics --- 00:18:54.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.731 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:54.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:54.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:18:54.731 00:18:54.731 --- 10.0.0.1 ping statistics --- 00:18:54.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.731 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2712091 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2712091 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2712091 ']' 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:54.731 10:47:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:54.731 [2024-11-07 10:47:22.358054] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
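For orientation, the namespace wiring that the nvmf_tcp_init/nvmfappstart trace above walks through can be condensed into a short shell sketch. This is a simplified reconstruction assembled from the commands visible in the log (interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, the iptables comment tag and the nvmf_tgt arguments are all taken from the trace); the ordering is compressed and the relative binary path is an assumption, so treat it as a sketch rather than the common.sh code itself.

# Move the target-side E810 port into its own namespace, address both ends,
# open TCP/4420 toward the initiator NIC, sanity-ping, then start nvmf_tgt
# inside the namespace (paths relative to the spdk repo root, assumed).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays on the host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                     # host -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1                 # namespaced target -> host
modprobe nvme-tcp
ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &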
00:18:54.731 [2024-11-07 10:47:22.358107] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.989 [2024-11-07 10:47:22.424317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.989 [2024-11-07 10:47:22.466466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.989 [2024-11-07 10:47:22.466497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.989 [2024-11-07 10:47:22.466504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.989 [2024-11-07 10:47:22.466510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.989 [2024-11-07 10:47:22.466516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.989 [2024-11-07 10:47:22.466967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.tnW 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.tnW 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.tnW 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.tnW 00:18:55.556 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:55.814 [2024-11-07 10:47:23.392179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.814 [2024-11-07 10:47:23.408186] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.814 [2024-11-07 10:47:23.408381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.814 malloc0 00:18:55.814 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:55.814 10:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2712340 00:18:55.814 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:55.814 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2712340 /var/tmp/bdevperf.sock 00:18:55.814 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2712340 ']' 00:18:55.814 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.814 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:55.814 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.814 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:55.814 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:56.072 [2024-11-07 10:47:23.528088] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:18:56.072 [2024-11-07 10:47:23.528137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2712340 ] 00:18:56.072 [2024-11-07 10:47:23.585606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.072 [2024-11-07 10:47:23.626368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.072 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:56.072 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:18:56.072 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.tnW 00:18:56.330 10:47:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:56.589 [2024-11-07 10:47:24.065638] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:56.589 TLSTESTn1 00:18:56.589 10:47:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:56.589 Running I/O for 10 seconds... 
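The TLS data-path check that follows is driven entirely over JSON-RPC against the bdevperf instance. As a hedged recap of the commands the trace shows (the PSK value, socket path, NQNs and RPC names are copied from the log; the exact sequencing in fips.sh may differ slightly), the flow is roughly:

# 1. Write the NVMe/TCP PSK to a 0600 file (value as printed in the trace).
key_path=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"

# 2. Start bdevperf with its own RPC socket, register the key, attach over TLS.
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0

# 3. Kick off the 10-second verify workload that produces the IOPS lines below.
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests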
00:18:58.897 5214.00 IOPS, 20.37 MiB/s [2024-11-07T09:47:27.503Z] 5328.50 IOPS, 20.81 MiB/s [2024-11-07T09:47:28.436Z] 5384.00 IOPS, 21.03 MiB/s [2024-11-07T09:47:29.370Z] 5401.25 IOPS, 21.10 MiB/s [2024-11-07T09:47:30.303Z] 5427.00 IOPS, 21.20 MiB/s [2024-11-07T09:47:31.677Z] 5437.67 IOPS, 21.24 MiB/s [2024-11-07T09:47:32.611Z] 5428.00 IOPS, 21.20 MiB/s [2024-11-07T09:47:33.545Z] 5439.50 IOPS, 21.25 MiB/s [2024-11-07T09:47:34.479Z] 5442.22 IOPS, 21.26 MiB/s [2024-11-07T09:47:34.479Z] 5447.50 IOPS, 21.28 MiB/s 00:19:06.808 Latency(us) 00:19:06.808 [2024-11-07T09:47:34.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.808 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:06.808 Verification LBA range: start 0x0 length 0x2000 00:19:06.808 TLSTESTn1 : 10.01 5452.13 21.30 0.00 0.00 23441.31 6240.17 32824.99 00:19:06.808 [2024-11-07T09:47:34.479Z] =================================================================================================================== 00:19:06.808 [2024-11-07T09:47:34.479Z] Total : 5452.13 21.30 0.00 0.00 23441.31 6240.17 32824.99 00:19:06.808 { 00:19:06.808 "results": [ 00:19:06.808 { 00:19:06.808 "job": "TLSTESTn1", 00:19:06.808 "core_mask": "0x4", 00:19:06.808 "workload": "verify", 00:19:06.808 "status": "finished", 00:19:06.808 "verify_range": { 00:19:06.808 "start": 0, 00:19:06.808 "length": 8192 00:19:06.808 }, 00:19:06.808 "queue_depth": 128, 00:19:06.808 "io_size": 4096, 00:19:06.808 "runtime": 10.014623, 00:19:06.808 "iops": 5452.1273541699975, 00:19:06.808 "mibps": 21.297372477226553, 00:19:06.808 "io_failed": 0, 00:19:06.808 "io_timeout": 0, 00:19:06.808 "avg_latency_us": 23441.306534694777, 00:19:06.808 "min_latency_us": 6240.166956521739, 00:19:06.808 "max_latency_us": 32824.98782608696 00:19:06.808 } 00:19:06.808 ], 00:19:06.808 "core_count": 1 00:19:06.808 } 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:06.808 nvmf_trace.0 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2712340 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2712340 ']' 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 2712340 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2712340 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2712340' 00:19:06.808 killing process with pid 2712340 00:19:06.808 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2712340 00:19:06.808 Received shutdown signal, test time was about 10.000000 seconds 00:19:06.808 00:19:06.808 Latency(us) 00:19:06.808 [2024-11-07T09:47:34.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.809 [2024-11-07T09:47:34.480Z] =================================================================================================================== 00:19:06.809 [2024-11-07T09:47:34.480Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.809 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2712340 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:07.067 rmmod nvme_tcp 00:19:07.067 rmmod nvme_fabrics 00:19:07.067 rmmod nvme_keyring 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2712091 ']' 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2712091 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2712091 ']' 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 2712091 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2712091 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:07.067 10:47:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2712091' 00:19:07.067 killing process with pid 2712091 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2712091 00:19:07.067 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2712091 00:19:07.326 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:07.326 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:07.326 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:07.326 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:07.326 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:07.326 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:07.326 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:07.326 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:07.326 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:07.326 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.326 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.326 10:47:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.857 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:09.857 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.tnW 00:19:09.857 00:19:09.857 real 0m20.599s 00:19:09.857 user 0m21.887s 00:19:09.857 sys 0m9.260s 00:19:09.857 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:09.857 10:47:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:09.857 ************************************ 00:19:09.857 END TEST nvmf_fips 00:19:09.857 ************************************ 00:19:09.857 10:47:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:09.857 10:47:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:09.857 10:47:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:09.857 10:47:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:09.857 ************************************ 00:19:09.857 START TEST nvmf_control_msg_list 00:19:09.857 ************************************ 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:09.857 * Looking for test storage... 
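Before the control_msg_list test starts sourcing its environment below, the trace above has already torn down the FIPS run. A condensed, approximate view of that cleanup (PIDs, module names, the SPDK_NVMF iptables tag, namespace and PSK path are from the log; what _remove_spdk_ns does internally is an assumption here):

# Stop the perf client and the target, then unwind kernel and network state.
kill "$bdevperf_pid"; wait "$bdevperf_pid"             # 2712340 (reactor_2) in this run
kill "$nvmfpid";      wait "$nvmfpid"                  # 2712091 (reactor_1 / nvmf_tgt)
modprobe -v -r nvme-tcp                                # rmmod nvme_tcp / fabrics / keyring lines
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk                        # presumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1
rm -f /tmp/spdk-psk.tnW                                # the 0600 PSK file created earlier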
00:19:09.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:09.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.857 --rc genhtml_branch_coverage=1 00:19:09.857 --rc genhtml_function_coverage=1 00:19:09.857 --rc genhtml_legend=1 00:19:09.857 --rc geninfo_all_blocks=1 00:19:09.857 --rc geninfo_unexecuted_blocks=1 00:19:09.857 00:19:09.857 ' 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:09.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.857 --rc genhtml_branch_coverage=1 00:19:09.857 --rc genhtml_function_coverage=1 00:19:09.857 --rc genhtml_legend=1 00:19:09.857 --rc geninfo_all_blocks=1 00:19:09.857 --rc geninfo_unexecuted_blocks=1 00:19:09.857 00:19:09.857 ' 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:09.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.857 --rc genhtml_branch_coverage=1 00:19:09.857 --rc genhtml_function_coverage=1 00:19:09.857 --rc genhtml_legend=1 00:19:09.857 --rc geninfo_all_blocks=1 00:19:09.857 --rc geninfo_unexecuted_blocks=1 00:19:09.857 00:19:09.857 ' 00:19:09.857 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:09.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.857 --rc genhtml_branch_coverage=1 00:19:09.857 --rc genhtml_function_coverage=1 00:19:09.858 --rc genhtml_legend=1 00:19:09.858 --rc geninfo_all_blocks=1 00:19:09.858 --rc geninfo_unexecuted_blocks=1 00:19:09.858 00:19:09.858 ' 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:09.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:09.858 10:47:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:15.128 10:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:15.128 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.128 10:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:15.128 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.128 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:15.129 Found net devices under 0000:86:00.0: cvl_0_0 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:15.129 Found net devices under 0000:86:00.1: cvl_0_1 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:15.129 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:15.387 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:15.387 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:15.387 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:15.387 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:15.387 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:15.387 10:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:15.387 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:15.387 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:15.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:15.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:19:15.387 00:19:15.387 --- 10.0.0.2 ping statistics --- 00:19:15.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.387 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:19:15.387 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:15.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:15.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:19:15.387 00:19:15.387 --- 10.0.0.1 ping statistics --- 00:19:15.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.387 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:19:15.387 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2717698 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2717698 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 2717698 ']' 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:15.388 10:47:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.388 [2024-11-07 10:47:43.006909] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:19:15.388 [2024-11-07 10:47:43.006953] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.646 [2024-11-07 10:47:43.074004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.646 [2024-11-07 10:47:43.115558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.646 [2024-11-07 10:47:43.115591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.646 [2024-11-07 10:47:43.115600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.646 [2024-11-07 10:47:43.115607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.646 [2024-11-07 10:47:43.115612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
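[annotation] In plain shell, the NVMe/TCP test-bed plumbing traced in the preceding records comes down to roughly the sequence below. This is a condensed sketch assembled from the commands recorded in this log, not a separate script in the repository; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses, port 4420 and the nvmf_tgt flags are the values this particular run used.
# move the target-side E810 port into its own namespace; the initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# launch the SPDK target inside the namespace (same flags as the traced invocation) and
# wait for its RPC socket at /var/tmp/spdk.sock before issuing any RPCs
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &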
00:19:15.646 [2024-11-07 10:47:43.116163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.646 [2024-11-07 10:47:43.263732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.646 Malloc0 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.646 10:47:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.646 [2024-11-07 10:47:43.300094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2717728 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:15.646 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2717729 00:19:15.647 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:15.647 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2717730 00:19:15.647 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2717728 00:19:15.647 10:47:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:15.905 [2024-11-07 10:47:43.358461] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:15.905 [2024-11-07 10:47:43.368536] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:15.905 [2024-11-07 10:47:43.368689] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:16.838 Initializing NVMe Controllers 00:19:16.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:16.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:16.839 Initialization complete. Launching workers. 
00:19:16.839 ======================================================== 00:19:16.839 Latency(us) 00:19:16.839 Device Information : IOPS MiB/s Average min max 00:19:16.839 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6245.00 24.39 159.78 137.88 374.05 00:19:16.839 ======================================================== 00:19:16.839 Total : 6245.00 24.39 159.78 137.88 374.05 00:19:16.839 00:19:16.839 Initializing NVMe Controllers 00:19:16.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:16.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:16.839 Initialization complete. Launching workers. 00:19:16.839 ======================================================== 00:19:16.839 Latency(us) 00:19:16.839 Device Information : IOPS MiB/s Average min max 00:19:16.839 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41229.65 40817.28 42209.24 00:19:16.839 ======================================================== 00:19:16.839 Total : 25.00 0.10 41229.65 40817.28 42209.24 00:19:16.839 00:19:16.839 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2717729 00:19:16.839 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2717730 00:19:17.097 Initializing NVMe Controllers 00:19:17.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:17.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:17.097 Initialization complete. Launching workers. 00:19:17.097 ======================================================== 00:19:17.097 Latency(us) 00:19:17.097 Device Information : IOPS MiB/s Average min max 00:19:17.097 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6150.00 24.02 162.22 151.58 407.76 00:19:17.097 ======================================================== 00:19:17.097 Total : 6150.00 24.02 162.22 151.58 407.76 00:19:17.097 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:17.097 rmmod nvme_tcp 00:19:17.097 rmmod nvme_fabrics 00:19:17.097 rmmod nvme_keyring 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 2717698 ']' 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2717698 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 2717698 ']' 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 2717698 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2717698 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2717698' 00:19:17.097 killing process with pid 2717698 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 2717698 00:19:17.097 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 2717698 00:19:17.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:17.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:17.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:17.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:17.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:17.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:17.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:17.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:17.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:17.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:17.356 10:47:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.260 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:19.260 00:19:19.260 real 0m9.860s 00:19:19.260 user 0m6.432s 00:19:19.260 sys 0m5.249s 00:19:19.260 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:19.260 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:19.260 ************************************ 00:19:19.260 END TEST nvmf_control_msg_list 00:19:19.260 ************************************ 
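[annotation] Summarizing the control_msg_list test that just finished: after the target was up, the harness configured it over RPC (rpc_cmd in these scripts forwards to scripts/rpc.py against /var/tmp/spdk.sock) and then ran three perf instances against the same subsystem. The sketch below restates the invocations captured above; the cnode0/Malloc0 names, the 768-byte in-capsule size and the single control message are the values this run used, and the loop is a condensed stand-in for the three backgrounded perf_pid1/2/3 jobs.
# transport limited to one control message and a small in-capsule data size
rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1
# subsystem with a single malloc-backed namespace, listening on the namespaced target IP
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
rpc_cmd bdev_malloc_create -b Malloc0 32 512
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# three single-queue-depth 4 KiB randread workers, one per core mask, for 1 second each
for core in 0x2 0x4 0x8; do
  ./build/bin/spdk_nvme_perf -c "$core" -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
done
wait
In the latency tables above, the instance on lcore 1 averaged roughly 41 ms per I/O at 25 IOPS while the instances on lcores 2 and 3 stayed around 160 us, which is consistent with one initiator having to wait on the single shared control message the transport was configured with.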
00:19:19.260 10:47:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:19.520 10:47:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:19.520 10:47:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:19.520 10:47:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:19.520 ************************************ 00:19:19.520 START TEST nvmf_wait_for_buf 00:19:19.520 ************************************ 00:19:19.520 10:47:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:19.520 * Looking for test storage... 00:19:19.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:19.520 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:19.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.520 --rc genhtml_branch_coverage=1 00:19:19.521 --rc genhtml_function_coverage=1 00:19:19.521 --rc genhtml_legend=1 00:19:19.521 --rc geninfo_all_blocks=1 00:19:19.521 --rc geninfo_unexecuted_blocks=1 00:19:19.521 00:19:19.521 ' 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:19.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.521 --rc genhtml_branch_coverage=1 00:19:19.521 --rc genhtml_function_coverage=1 00:19:19.521 --rc genhtml_legend=1 00:19:19.521 --rc geninfo_all_blocks=1 00:19:19.521 --rc geninfo_unexecuted_blocks=1 00:19:19.521 00:19:19.521 ' 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:19.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.521 --rc genhtml_branch_coverage=1 00:19:19.521 --rc genhtml_function_coverage=1 00:19:19.521 --rc genhtml_legend=1 00:19:19.521 --rc geninfo_all_blocks=1 00:19:19.521 --rc geninfo_unexecuted_blocks=1 00:19:19.521 00:19:19.521 ' 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:19.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.521 --rc genhtml_branch_coverage=1 00:19:19.521 --rc genhtml_function_coverage=1 00:19:19.521 --rc genhtml_legend=1 00:19:19.521 --rc geninfo_all_blocks=1 00:19:19.521 --rc geninfo_unexecuted_blocks=1 00:19:19.521 00:19:19.521 ' 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:19.521 10:47:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:19.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:19.521 10:47:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.107 
10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:26.107 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:26.107 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:26.107 Found net devices under 0000:86:00.0: cvl_0_0 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:26.107 Found net devices under 0000:86:00.1: cvl_0_1 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.107 10:47:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:26.107 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:26.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:19:26.107 00:19:26.107 --- 10.0.0.2 ping statistics --- 00:19:26.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.108 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
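For reference, the nvmf_tcp_init plumbing traced above condenses to the sketch below. The cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.x/24 addresses are taken from this log and are specific to this rig; ipts is the harness wrapper that tags the iptables rule exactly as shown at common.sh@790.

    # target port moves into its own network namespace, initiator port stays in the root ns
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the SPDK_NVMF comment tags the rule so teardown (iptr, later in this trace) can strip it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # root ns -> target, as verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> initiator, verified next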
00:19:26.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:19:26.108 00:19:26.108 --- 10.0.0.1 ping statistics --- 00:19:26.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.108 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2721479 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2721479 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 2721479 ']' 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:26.108 10:47:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.108 [2024-11-07 10:47:52.921753] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
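The nvmfappstart step above, together with the rpc_cmd sequence that follows in this trace, amounts to roughly the sketch below. rpc_cmd is the test harness's wrapper around scripts/rpc.py; it is written here as direct rpc.py calls against the default /var/tmp/spdk.sock, with workspace-relative paths assumed, and the harness additionally waits for the RPC socket (waitforlisten) before issuing any of these.

    # start the target inside the namespace, paused until RPC configuration arrives
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

    # shrink the iobuf small pool so the TCP transport is forced to retry buffer allocation
    ./scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    ./scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    ./scripts/rpc.py framework_start_init

    # back the subsystem with a 32 MiB, 512 B-block malloc bdev and expose it on TCP/4420
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # drive reads from the root namespace; the test passes only if iobuf_get_stats reports a
    # non-zero small_pool.retry count, i.e. the target really did wait for buffers
    ./build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    ./scripts/rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'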
00:19:26.108 [2024-11-07 10:47:52.921797] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.108 [2024-11-07 10:47:52.987927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.108 [2024-11-07 10:47:53.027398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.108 [2024-11-07 10:47:53.027440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.108 [2024-11-07 10:47:53.027448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.108 [2024-11-07 10:47:53.027454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.108 [2024-11-07 10:47:53.027474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.108 [2024-11-07 10:47:53.028042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.108 10:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.108 Malloc0 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.108 [2024-11-07 10:47:53.197689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.108 [2024-11-07 10:47:53.221888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.108 10:47:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:26.108 [2024-11-07 10:47:53.296520] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:27.044 Initializing NVMe Controllers 00:19:27.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:27.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:27.044 Initialization complete. Launching workers. 00:19:27.044 ======================================================== 00:19:27.044 Latency(us) 00:19:27.044 Device Information : IOPS MiB/s Average min max 00:19:27.044 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32269.70 7262.83 63843.65 00:19:27.044 ======================================================== 00:19:27.044 Total : 129.00 16.12 32269.70 7262.83 63843.65 00:19:27.044 00:19:27.044 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:27.044 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:27.044 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.044 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:27.303 rmmod nvme_tcp 00:19:27.303 rmmod nvme_fabrics 00:19:27.303 rmmod nvme_keyring 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2721479 ']' 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2721479 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 2721479 ']' 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 2721479 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2721479 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2721479' 00:19:27.303 killing process with pid 2721479 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 2721479 00:19:27.303 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 2721479 00:19:27.562 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:27.562 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:27.562 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:27.562 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:27.562 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:27.562 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:27.562 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:27.562 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:27.562 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:27.562 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.562 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.562 10:47:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.490 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:29.490 00:19:29.490 real 0m10.090s 00:19:29.490 user 0m3.779s 00:19:29.490 sys 0m4.693s 00:19:29.490 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:29.490 10:47:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.490 ************************************ 00:19:29.490 END TEST nvmf_wait_for_buf 00:19:29.490 ************************************ 00:19:29.490 10:47:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:29.490 10:47:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:29.490 10:47:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:29.490 10:47:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:29.490 10:47:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:29.490 10:47:57 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:34.852 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:34.852 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:34.852 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:34.852 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:34.852 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:34.852 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:34.852 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:34.852 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:34.852 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:34.852 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:34.852 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:34.853 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:34.853 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:34.853 Found net devices under 0000:86:00.0: cvl_0_0 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:34.853 Found net devices under 0000:86:00.1: cvl_0_1 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:34.853 ************************************ 00:19:34.853 START TEST nvmf_perf_adq 00:19:34.853 ************************************ 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:34.853 * Looking for test storage... 00:19:34.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.853 10:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:34.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.853 --rc genhtml_branch_coverage=1 00:19:34.853 --rc genhtml_function_coverage=1 00:19:34.853 --rc genhtml_legend=1 00:19:34.853 --rc geninfo_all_blocks=1 00:19:34.853 --rc geninfo_unexecuted_blocks=1 00:19:34.853 00:19:34.853 ' 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:34.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.853 --rc genhtml_branch_coverage=1 00:19:34.853 --rc genhtml_function_coverage=1 00:19:34.853 --rc genhtml_legend=1 00:19:34.853 --rc geninfo_all_blocks=1 00:19:34.853 --rc geninfo_unexecuted_blocks=1 00:19:34.853 00:19:34.853 ' 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:34.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.853 --rc genhtml_branch_coverage=1 00:19:34.853 --rc genhtml_function_coverage=1 00:19:34.853 --rc genhtml_legend=1 00:19:34.853 --rc geninfo_all_blocks=1 00:19:34.853 --rc geninfo_unexecuted_blocks=1 00:19:34.853 00:19:34.853 ' 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:34.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.853 --rc genhtml_branch_coverage=1 00:19:34.853 --rc genhtml_function_coverage=1 00:19:34.853 --rc genhtml_legend=1 00:19:34.853 --rc geninfo_all_blocks=1 00:19:34.853 --rc geninfo_unexecuted_blocks=1 00:19:34.853 00:19:34.853 ' 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
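The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.x before picking coverage flags: each version string is split on '.', '-' and ':' and compared numerically field by field. A minimal standalone equivalent is sketched below; version_lt is a hypothetical name, not the harness's own helper.

    version_lt() {                       # returns 0 (true) when $1 sorts before $2
        local IFS=.-: i
        local -a a=($1) b=($2)           # "1.15" -> (1 15), "2" -> (2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1                         # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the case traced above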
00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.853 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:34.854 10:48:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:34.854 10:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:40.127 10:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:40.127 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:40.127 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:40.127 Found net devices under 0000:86:00.0: cvl_0_0 00:19:40.127 10:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:40.127 Found net devices under 0000:86:00.1: cvl_0_1 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:40.127 10:48:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:41.512 10:48:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:43.413 10:48:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:48.683 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:48.684 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:48.684 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:48.684 Found net devices under 0000:86:00.0: cvl_0_0 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:48.684 Found net devices under 0000:86:00.1: cvl_0_1 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:48.684 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.685 10:48:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:48.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:19:48.685 00:19:48.685 --- 10.0.0.2 ping statistics --- 00:19:48.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.685 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:48.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:19:48.685 00:19:48.685 --- 10.0.0.1 ping statistics --- 00:19:48.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.685 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2730115 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2730115 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2730115 ']' 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.685 [2024-11-07 10:48:16.117011] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
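The nvmftestinit sequence traced above splits the two E810 ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the default namespace as 10.0.0.1, and an iptables rule tagged SPDK_NVMF opens TCP port 4420 so the later cleanup can strip exactly that rule. Below is a minimal standalone sketch of the same topology; interface names, addresses, and the comment tag are taken from the trace, and it assumes the two ports are cabled back-to-back as on this test node.

#!/usr/bin/env bash
# Sketch of the target/initiator split performed by nvmf_tcp_init in the trace above.
set -euo pipefail

TARGET_NS=cvl_0_0_ns_spdk   # namespace that will own the target-side port
TARGET_IF=cvl_0_0           # port handed to the SPDK target (10.0.0.2)
INITIATOR_IF=cvl_0_1        # port left in the default namespace (10.0.0.1)

# Start from clean addresses, then move the target port into its own namespace.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"

# Address both ends of the 10.0.0.0/24 link and bring them up.
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

# Open the NVMe/TCP port; the comment tag mirrors what the ipts helper prints above
# and lets the cleanup phase restore only the untagged rules later.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions before the target is started in the namespace.
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1

The two pings at the end mirror the checks in the log and confirm the ports reach each other across the namespace boundary before nvmf_tgt is launched inside cvl_0_0_ns_spdk.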
00:19:48.685 [2024-11-07 10:48:16.117060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.685 [2024-11-07 10:48:16.184343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.685 [2024-11-07 10:48:16.229203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.685 [2024-11-07 10:48:16.229245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.685 [2024-11-07 10:48:16.229252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.685 [2024-11-07 10:48:16.229258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.685 [2024-11-07 10:48:16.229263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.685 [2024-11-07 10:48:16.230733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.685 [2024-11-07 10:48:16.230828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.685 [2024-11-07 10:48:16.230917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.685 [2024-11-07 10:48:16.230919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.685 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.944 
10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.944 [2024-11-07 10:48:16.435390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.944 Malloc1 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.944 [2024-11-07 10:48:16.502970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2730339 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:48.944 10:48:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:51.474 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:51.474 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.474 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:51.474 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.474 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:51.474 "tick_rate": 2300000000, 00:19:51.474 "poll_groups": [ 00:19:51.474 { 00:19:51.474 "name": "nvmf_tgt_poll_group_000", 00:19:51.474 "admin_qpairs": 1, 00:19:51.474 "io_qpairs": 1, 00:19:51.474 "current_admin_qpairs": 1, 00:19:51.474 "current_io_qpairs": 1, 00:19:51.474 "pending_bdev_io": 0, 00:19:51.474 "completed_nvme_io": 19261, 00:19:51.474 "transports": [ 00:19:51.474 { 00:19:51.474 "trtype": "TCP" 00:19:51.474 } 00:19:51.474 ] 00:19:51.474 }, 00:19:51.474 { 00:19:51.474 "name": "nvmf_tgt_poll_group_001", 00:19:51.474 "admin_qpairs": 0, 00:19:51.474 "io_qpairs": 1, 00:19:51.474 "current_admin_qpairs": 0, 00:19:51.474 "current_io_qpairs": 1, 00:19:51.474 "pending_bdev_io": 0, 00:19:51.474 "completed_nvme_io": 19447, 00:19:51.474 "transports": [ 00:19:51.474 { 00:19:51.474 "trtype": "TCP" 00:19:51.474 } 00:19:51.474 ] 00:19:51.474 }, 00:19:51.474 { 00:19:51.474 "name": "nvmf_tgt_poll_group_002", 00:19:51.474 "admin_qpairs": 0, 00:19:51.474 "io_qpairs": 1, 00:19:51.474 "current_admin_qpairs": 0, 00:19:51.474 "current_io_qpairs": 1, 00:19:51.474 "pending_bdev_io": 0, 00:19:51.474 "completed_nvme_io": 19493, 00:19:51.474 "transports": [ 00:19:51.474 { 00:19:51.474 "trtype": "TCP" 00:19:51.474 } 00:19:51.474 ] 00:19:51.474 }, 00:19:51.474 { 00:19:51.474 "name": "nvmf_tgt_poll_group_003", 00:19:51.474 "admin_qpairs": 0, 00:19:51.474 "io_qpairs": 1, 00:19:51.474 "current_admin_qpairs": 0, 00:19:51.474 "current_io_qpairs": 1, 00:19:51.474 "pending_bdev_io": 0, 00:19:51.474 "completed_nvme_io": 19088, 00:19:51.474 "transports": [ 00:19:51.474 { 00:19:51.474 "trtype": "TCP" 00:19:51.474 } 00:19:51.474 ] 00:19:51.474 } 00:19:51.474 ] 00:19:51.474 }' 00:19:51.474 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:51.474 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:51.474 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:51.474 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:51.474 10:48:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2730339 00:19:59.586 Initializing NVMe Controllers 00:19:59.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:59.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:59.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:59.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:59.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:19:59.586 Initialization complete. Launching workers. 00:19:59.586 ======================================================== 00:19:59.586 Latency(us) 00:19:59.586 Device Information : IOPS MiB/s Average min max 00:19:59.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10122.00 39.54 6323.54 2436.67 10549.69 00:19:59.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10445.30 40.80 6126.21 1924.36 10558.81 00:19:59.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10411.50 40.67 6147.47 2048.09 10254.10 00:19:59.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10278.40 40.15 6225.53 1867.23 10697.93 00:19:59.586 ======================================================== 00:19:59.586 Total : 41257.19 161.16 6204.73 1867.23 10697.93 00:19:59.586 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:59.586 rmmod nvme_tcp 00:19:59.586 rmmod nvme_fabrics 00:19:59.586 rmmod nvme_keyring 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2730115 ']' 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2730115 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2730115 ']' 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2730115 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2730115 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2730115' 00:19:59.586 killing process with pid 2730115 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2730115 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2730115 00:19:59.586 10:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.586 10:48:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.491 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:01.491 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:01.492 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:01.492 10:48:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:02.868 10:48:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:04.771 10:48:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:10.045 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:10.045 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:10.046 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:10.046 Found net devices under 0000:86:00.0: cvl_0_0 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:10.046 Found net devices under 0000:86:00.1: cvl_0_1 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:10.046 10:48:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:10.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:10.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:20:10.046 00:20:10.046 --- 10.0.0.2 ping statistics --- 00:20:10.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.046 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:10.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:10.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:20:10.046 00:20:10.046 --- 10.0.0.1 ping statistics --- 00:20:10.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.046 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:10.046 net.core.busy_poll = 1 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:10.046 net.core.busy_read = 1 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2734141 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2734141 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2734141 ']' 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:10.046 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.046 [2024-11-07 10:48:37.709220] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:20:10.047 [2024-11-07 10:48:37.709265] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.305 [2024-11-07 10:48:37.776087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.305 [2024-11-07 10:48:37.818701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
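For the second pass the test enables ADQ before restarting the target: adq_configure_driver, traced above, turns on hardware TC offload, enables busy polling, builds a two-traffic-class mqprio qdisc, and installs a hardware flower filter that steers NVMe/TCP traffic for 10.0.0.2:4420 into traffic class 1. The sketch below replays those steps as a standalone script; the queue layout (2@0 2@2) and the match are taken from the log, anything else is an assumption.

#!/usr/bin/env bash
# Sketch of the ADQ setup that adq_configure_driver performs in the trace above,
# for the ice/E810 port cvl_0_0 living in the cvl_0_0_ns_spdk namespace.
set -euo pipefail

IF=cvl_0_0
ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }   # run a command inside the target namespace

# Hardware TC offload must be on for mqprio channels and flower filters to land in the
# NIC; channel-pkt-inspect-optimize is switched off exactly as in the trace.
ns ethtool --offload "$IF" hw-tc-offload on
ns ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off

# Busy polling keeps the target's reactors polling their sockets instead of sleeping,
# which is what lets ADQ keep each connection on its dedicated hardware queue.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC 0 keeps queues 0-1, TC 1 gets its own channel on queues 2-3.
ns tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ns tc qdisc add dev "$IF" ingress

# Steer NVMe/TCP traffic for the target (10.0.0.2:4420) into TC 1 entirely in hardware.
ns tc filter add dev "$IF" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The remaining step in the trace, scripts/perf/nvmf/set_xps_rxqs on the same port, aligns transmit packet steering with the receive queues and is left out of the sketch here.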
00:20:10.305 [2024-11-07 10:48:37.818741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.305 [2024-11-07 10:48:37.818749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.305 [2024-11-07 10:48:37.818755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.305 [2024-11-07 10:48:37.818761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:10.305 [2024-11-07 10:48:37.820351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.305 [2024-11-07 10:48:37.820466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.305 [2024-11-07 10:48:37.820511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.305 [2024-11-07 10:48:37.820513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.305 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:10.305 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.306 10:48:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.564 10:48:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.564 [2024-11-07 10:48:38.021347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.564 Malloc1 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.564 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.565 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.565 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:10.565 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.565 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:10.565 [2024-11-07 10:48:38.084060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.565 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.565 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2734166 00:20:10.565 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:10.565 10:48:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:12.465 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:12.465 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.465 10:48:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:12.465 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.465 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:12.465 "tick_rate": 2300000000, 00:20:12.465 "poll_groups": [ 00:20:12.465 { 00:20:12.465 "name": "nvmf_tgt_poll_group_000", 00:20:12.465 "admin_qpairs": 1, 00:20:12.465 "io_qpairs": 2, 00:20:12.465 "current_admin_qpairs": 1, 00:20:12.465 "current_io_qpairs": 2, 00:20:12.465 "pending_bdev_io": 0, 00:20:12.465 "completed_nvme_io": 28037, 00:20:12.465 "transports": [ 00:20:12.465 { 00:20:12.465 "trtype": "TCP" 00:20:12.465 } 00:20:12.465 ] 00:20:12.465 }, 00:20:12.465 { 00:20:12.465 "name": "nvmf_tgt_poll_group_001", 00:20:12.465 "admin_qpairs": 0, 00:20:12.465 "io_qpairs": 2, 00:20:12.465 "current_admin_qpairs": 0, 00:20:12.465 "current_io_qpairs": 2, 00:20:12.465 "pending_bdev_io": 0, 00:20:12.465 "completed_nvme_io": 28648, 00:20:12.465 "transports": [ 00:20:12.465 { 00:20:12.465 "trtype": "TCP" 00:20:12.465 } 00:20:12.465 ] 00:20:12.465 }, 00:20:12.465 { 00:20:12.465 "name": "nvmf_tgt_poll_group_002", 00:20:12.465 "admin_qpairs": 0, 00:20:12.465 "io_qpairs": 0, 00:20:12.465 "current_admin_qpairs": 0, 00:20:12.465 "current_io_qpairs": 0, 00:20:12.465 "pending_bdev_io": 0, 00:20:12.465 "completed_nvme_io": 0, 00:20:12.465 "transports": [ 00:20:12.465 { 00:20:12.465 "trtype": "TCP" 00:20:12.465 } 00:20:12.465 ] 00:20:12.465 }, 00:20:12.465 { 00:20:12.465 "name": "nvmf_tgt_poll_group_003", 00:20:12.465 "admin_qpairs": 0, 00:20:12.465 "io_qpairs": 0, 00:20:12.465 "current_admin_qpairs": 0, 00:20:12.465 "current_io_qpairs": 0, 00:20:12.465 "pending_bdev_io": 0, 00:20:12.465 "completed_nvme_io": 0, 00:20:12.465 "transports": [ 00:20:12.465 { 00:20:12.465 "trtype": "TCP" 00:20:12.465 } 00:20:12.465 ] 00:20:12.465 } 00:20:12.465 ] 00:20:12.465 }' 00:20:12.465 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:12.465 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:12.724 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:12.724 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:12.724 10:48:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2734166 00:20:20.836 Initializing NVMe Controllers 00:20:20.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:20.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:20.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:20.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:20.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:20.836 Initialization complete. Launching workers. 
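While spdk_nvme_perf runs, the test polls nvmf_get_stats (traced above) and uses jq to prove that ADQ placement worked: with --enable-placement-id 1 and the flower filter in place, the four I/O queue pairs should concentrate on two poll groups, so at least two poll groups must report current_io_qpairs == 0. A sketch of that check against the default RPC socket is shown below; the rpc.py path assumes the workspace layout from this log, and the jq expression folds the test's select ... | wc -l pattern into a single length.

#!/usr/bin/env bash
# Sketch of the ADQ placement check made in the trace above. The target's network
# lives in cvl_0_0_ns_spdk, but /var/tmp/spdk.sock is a local UNIX socket, so no
# "ip netns exec" is needed to query it.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
stats=$("$SPDK/scripts/rpc.py" nvmf_get_stats)

# Poll groups that received no I/O queue pairs vs. those that did. The 4 perf
# connections (-c 0xF0) should land on 2 poll groups, leaving at least 2 idle.
idle=$(jq '[.poll_groups[] | select(.current_io_qpairs == 0)] | length' <<< "$stats")
busy=$(jq '[.poll_groups[] | select(.current_io_qpairs > 0)] | length' <<< "$stats")

echo "idle poll groups: $idle, poll groups carrying I/O: $busy"
(( idle >= 2 )) || { echo "ADQ did not steer connections onto dedicated poll groups" >&2; exit 1; }

The first (non-ADQ) pass earlier in the log makes the opposite assertion with the same RPC: all four poll groups should then carry exactly one I/O queue pair each.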
00:20:20.836 ======================================================== 00:20:20.836 Latency(us) 00:20:20.836 Device Information : IOPS MiB/s Average min max 00:20:20.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8720.94 34.07 7340.46 1431.86 53931.07 00:20:20.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7513.98 29.35 8516.80 1460.19 54830.80 00:20:20.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7623.97 29.78 8420.26 1554.10 52949.70 00:20:20.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6393.21 24.97 10010.17 1600.27 53490.52 00:20:20.836 ======================================================== 00:20:20.836 Total : 30252.10 118.17 8468.96 1431.86 54830.80 00:20:20.836 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:20.836 rmmod nvme_tcp 00:20:20.836 rmmod nvme_fabrics 00:20:20.836 rmmod nvme_keyring 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2734141 ']' 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2734141 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2734141 ']' 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2734141 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2734141 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2734141' 00:20:20.836 killing process with pid 2734141 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2734141 00:20:20.836 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2734141 00:20:21.095 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:21.095 
10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:21.095 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:21.095 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:21.095 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:21.095 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:21.095 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:21.095 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:21.095 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:21.095 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.095 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.095 10:48:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:24.384 00:20:24.384 real 0m49.561s 00:20:24.384 user 2m43.844s 00:20:24.384 sys 0m9.758s 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.384 ************************************ 00:20:24.384 END TEST nvmf_perf_adq 00:20:24.384 ************************************ 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:24.384 ************************************ 00:20:24.384 START TEST nvmf_shutdown 00:20:24.384 ************************************ 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:24.384 * Looking for test storage... 
00:20:24.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:24.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.384 --rc genhtml_branch_coverage=1 00:20:24.384 --rc genhtml_function_coverage=1 00:20:24.384 --rc genhtml_legend=1 00:20:24.384 --rc geninfo_all_blocks=1 00:20:24.384 --rc geninfo_unexecuted_blocks=1 00:20:24.384 00:20:24.384 ' 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:24.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.384 --rc genhtml_branch_coverage=1 00:20:24.384 --rc genhtml_function_coverage=1 00:20:24.384 --rc genhtml_legend=1 00:20:24.384 --rc geninfo_all_blocks=1 00:20:24.384 --rc geninfo_unexecuted_blocks=1 00:20:24.384 00:20:24.384 ' 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:24.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.384 --rc genhtml_branch_coverage=1 00:20:24.384 --rc genhtml_function_coverage=1 00:20:24.384 --rc genhtml_legend=1 00:20:24.384 --rc geninfo_all_blocks=1 00:20:24.384 --rc geninfo_unexecuted_blocks=1 00:20:24.384 00:20:24.384 ' 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:24.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.384 --rc genhtml_branch_coverage=1 00:20:24.384 --rc genhtml_function_coverage=1 00:20:24.384 --rc genhtml_legend=1 00:20:24.384 --rc geninfo_all_blocks=1 00:20:24.384 --rc geninfo_unexecuted_blocks=1 00:20:24.384 00:20:24.384 ' 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
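The scripts/common.sh trace above is the version gate used to pick lcov options: both version strings are split on '.', '-' and ':' (IFS=.-:), the fields are compared numerically left to right, and the first differing field decides the result, so 1.15 sorts before 2. A minimal stand-alone sketch of that comparison pattern (illustrative names, not the exact scripts/common.sh implementation):

  # Field-by-field numeric version compare; succeeds when $1 sorts before $2.
  version_lt() {
      local IFS=.-:                    # same separators the trace splits on
      local -a a=($1) b=($2)
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}    # missing fields count as 0
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1                         # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo 'lcov older than 2.x: use the legacy --rc options'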
00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.384 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:24.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:24.385 10:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:24.385 ************************************ 00:20:24.385 START TEST nvmf_shutdown_tc1 00:20:24.385 ************************************ 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:24.385 10:48:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:30.950 10:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.950 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:30.951 10:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:30.951 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:30.951 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:30.951 Found net devices under 0000:86:00.0: cvl_0_0 00:20:30.951 10:48:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:30.951 Found net devices under 0000:86:00.1: cvl_0_1 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:30.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:30.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:20:30.951 00:20:30.951 --- 10.0.0.2 ping statistics --- 00:20:30.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.951 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:30.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:30.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:20:30.951 00:20:30.951 --- 10.0.0.1 ping statistics --- 00:20:30.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.951 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2739613 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2739613 00:20:30.951 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2739613 ']' 00:20:30.952 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.952 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:30.952 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
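The nvmfappstart sequence above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with -i 0 -e 0xFFFF -m 0x1E, records the PID as nvmfpid, and then waitforlisten blocks until the application's RPC socket (/var/tmp/spdk.sock) answers before any rpc_cmd is issued. Roughly, and with the polling loop below as an illustrative stand-in for the real common.sh helper, the pattern is:

  # Start the target in its network namespace and wait for its RPC socket.
  # $rootdir is the SPDK checkout, as elsewhere in this run; the 100 x 100 ms
  # retry budget is an assumption of this sketch.
  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break                        # app is up and answering JSON-RPC
      fi
      sleep 0.1
  done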
00:20:30.952 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:30.952 10:48:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.952 [2024-11-07 10:48:57.881003] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:20:30.952 [2024-11-07 10:48:57.881050] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.952 [2024-11-07 10:48:57.947361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:30.952 [2024-11-07 10:48:57.988052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.952 [2024-11-07 10:48:57.988092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.952 [2024-11-07 10:48:57.988099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.952 [2024-11-07 10:48:57.988105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.952 [2024-11-07 10:48:57.988110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.952 [2024-11-07 10:48:57.989644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.952 [2024-11-07 10:48:57.989732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:30.952 [2024-11-07 10:48:57.989818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.952 [2024-11-07 10:48:57.989819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.952 [2024-11-07 10:48:58.134574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:30.952 10:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.952 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.952 Malloc1 
00:20:30.952 [2024-11-07 10:48:58.252186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.952 Malloc2 00:20:30.952 Malloc3 00:20:30.952 Malloc4 00:20:30.952 Malloc5 00:20:30.952 Malloc6 00:20:30.952 Malloc7 00:20:30.952 Malloc8 00:20:30.952 Malloc9 00:20:31.211 Malloc10 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2739776 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2739776 /var/tmp/bdevperf.sock 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2739776 ']' 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
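The --json /dev/fd/63 argument above comes from bash process substitution: gen_nvmf_target_json prints one bdev_nvme_attach_controller entry per requested subsystem (the parameter blocks that follow in the trace), and the app reads that stream as its startup configuration; the bdevperf invocation at the end of this excerpt uses the same mechanism (--json /dev/fd/62). A cut-down sketch of the generator pattern (the exact common.sh output differs in detail, e.g. its jq post-processing):

  # Emit one attach-controller entry per subsystem and wrap them in the
  # SPDK JSON-config shape; addresses and NQNs mirror the values in this run.
  gen_target_json_sketch() {
      local n entry entries=()
      for n in "$@"; do
          entry='{"method":"bdev_nvme_attach_controller","params":'
          entry+='{"name":"Nvme'"$n"'","trtype":"tcp","traddr":"10.0.0.2",'
          entry+='"adrfam":"ipv4","trsvcid":"4420",'
          entry+='"subnqn":"nqn.2016-06.io.spdk:cnode'"$n"'",'
          entry+='"hostnqn":"nqn.2016-06.io.spdk:host'"$n"'",'
          entry+='"hdgst":false,"ddgst":false}}'
          entries+=("$entry")
      done
      local IFS=,
      printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${entries[*]}"
  }
  # Process substitution hands the generated JSON to the app as /dev/fd/N:
  #   $rootdir/build/examples/bdevperf --json <(gen_target_json_sketch {1..10}) \
  #       -q 64 -o 65536 -w verify -t 1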
00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.211 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.211 { 00:20:31.211 "params": { 00:20:31.211 "name": "Nvme$subsystem", 00:20:31.211 "trtype": "$TEST_TRANSPORT", 00:20:31.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.211 "adrfam": "ipv4", 00:20:31.211 "trsvcid": "$NVMF_PORT", 00:20:31.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.211 "hdgst": ${hdgst:-false}, 00:20:31.211 "ddgst": ${ddgst:-false} 00:20:31.211 }, 00:20:31.211 "method": "bdev_nvme_attach_controller" 00:20:31.211 } 00:20:31.211 EOF 00:20:31.211 )") 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.212 { 00:20:31.212 "params": { 00:20:31.212 "name": "Nvme$subsystem", 00:20:31.212 "trtype": "$TEST_TRANSPORT", 00:20:31.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.212 "adrfam": "ipv4", 00:20:31.212 "trsvcid": "$NVMF_PORT", 00:20:31.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.212 "hdgst": ${hdgst:-false}, 00:20:31.212 "ddgst": ${ddgst:-false} 00:20:31.212 }, 00:20:31.212 "method": "bdev_nvme_attach_controller" 00:20:31.212 } 00:20:31.212 EOF 00:20:31.212 )") 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.212 { 00:20:31.212 "params": { 00:20:31.212 "name": "Nvme$subsystem", 00:20:31.212 "trtype": "$TEST_TRANSPORT", 00:20:31.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.212 "adrfam": "ipv4", 00:20:31.212 "trsvcid": "$NVMF_PORT", 00:20:31.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.212 "hdgst": ${hdgst:-false}, 00:20:31.212 "ddgst": ${ddgst:-false} 00:20:31.212 }, 00:20:31.212 "method": "bdev_nvme_attach_controller" 00:20:31.212 } 00:20:31.212 EOF 00:20:31.212 )") 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:20:31.212 { 00:20:31.212 "params": { 00:20:31.212 "name": "Nvme$subsystem", 00:20:31.212 "trtype": "$TEST_TRANSPORT", 00:20:31.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.212 "adrfam": "ipv4", 00:20:31.212 "trsvcid": "$NVMF_PORT", 00:20:31.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.212 "hdgst": ${hdgst:-false}, 00:20:31.212 "ddgst": ${ddgst:-false} 00:20:31.212 }, 00:20:31.212 "method": "bdev_nvme_attach_controller" 00:20:31.212 } 00:20:31.212 EOF 00:20:31.212 )") 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.212 { 00:20:31.212 "params": { 00:20:31.212 "name": "Nvme$subsystem", 00:20:31.212 "trtype": "$TEST_TRANSPORT", 00:20:31.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.212 "adrfam": "ipv4", 00:20:31.212 "trsvcid": "$NVMF_PORT", 00:20:31.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.212 "hdgst": ${hdgst:-false}, 00:20:31.212 "ddgst": ${ddgst:-false} 00:20:31.212 }, 00:20:31.212 "method": "bdev_nvme_attach_controller" 00:20:31.212 } 00:20:31.212 EOF 00:20:31.212 )") 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.212 { 00:20:31.212 "params": { 00:20:31.212 "name": "Nvme$subsystem", 00:20:31.212 "trtype": "$TEST_TRANSPORT", 00:20:31.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.212 "adrfam": "ipv4", 00:20:31.212 "trsvcid": "$NVMF_PORT", 00:20:31.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.212 "hdgst": ${hdgst:-false}, 00:20:31.212 "ddgst": ${ddgst:-false} 00:20:31.212 }, 00:20:31.212 "method": "bdev_nvme_attach_controller" 00:20:31.212 } 00:20:31.212 EOF 00:20:31.212 )") 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.212 [2024-11-07 10:48:58.726035] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:20:31.212 [2024-11-07 10:48:58.726086] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.212 { 00:20:31.212 "params": { 00:20:31.212 "name": "Nvme$subsystem", 00:20:31.212 "trtype": "$TEST_TRANSPORT", 00:20:31.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.212 "adrfam": "ipv4", 00:20:31.212 "trsvcid": "$NVMF_PORT", 00:20:31.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.212 "hdgst": ${hdgst:-false}, 00:20:31.212 "ddgst": ${ddgst:-false} 00:20:31.212 }, 00:20:31.212 "method": "bdev_nvme_attach_controller" 00:20:31.212 } 00:20:31.212 EOF 00:20:31.212 )") 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.212 { 00:20:31.212 "params": { 00:20:31.212 "name": "Nvme$subsystem", 00:20:31.212 "trtype": "$TEST_TRANSPORT", 00:20:31.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.212 "adrfam": "ipv4", 00:20:31.212 "trsvcid": "$NVMF_PORT", 00:20:31.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.212 "hdgst": ${hdgst:-false}, 00:20:31.212 "ddgst": ${ddgst:-false} 00:20:31.212 }, 00:20:31.212 "method": "bdev_nvme_attach_controller" 00:20:31.212 } 00:20:31.212 EOF 00:20:31.212 )") 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.212 { 00:20:31.212 "params": { 00:20:31.212 "name": "Nvme$subsystem", 00:20:31.212 "trtype": "$TEST_TRANSPORT", 00:20:31.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.212 "adrfam": "ipv4", 00:20:31.212 "trsvcid": "$NVMF_PORT", 00:20:31.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.212 "hdgst": ${hdgst:-false}, 00:20:31.212 "ddgst": ${ddgst:-false} 00:20:31.212 }, 00:20:31.212 "method": "bdev_nvme_attach_controller" 00:20:31.212 } 00:20:31.212 EOF 00:20:31.212 )") 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:31.212 { 00:20:31.212 "params": { 00:20:31.212 "name": "Nvme$subsystem", 00:20:31.212 "trtype": "$TEST_TRANSPORT", 00:20:31.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.212 "adrfam": "ipv4", 00:20:31.212 "trsvcid": "$NVMF_PORT", 00:20:31.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.212 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:20:31.212 "hdgst": ${hdgst:-false}, 00:20:31.212 "ddgst": ${ddgst:-false} 00:20:31.212 }, 00:20:31.212 "method": "bdev_nvme_attach_controller" 00:20:31.212 } 00:20:31.212 EOF 00:20:31.212 )") 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:31.212 10:48:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:31.212 "params": { 00:20:31.212 "name": "Nvme1", 00:20:31.212 "trtype": "tcp", 00:20:31.212 "traddr": "10.0.0.2", 00:20:31.212 "adrfam": "ipv4", 00:20:31.212 "trsvcid": "4420", 00:20:31.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.212 "hdgst": false, 00:20:31.212 "ddgst": false 00:20:31.212 }, 00:20:31.212 "method": "bdev_nvme_attach_controller" 00:20:31.212 },{ 00:20:31.212 "params": { 00:20:31.212 "name": "Nvme2", 00:20:31.212 "trtype": "tcp", 00:20:31.212 "traddr": "10.0.0.2", 00:20:31.212 "adrfam": "ipv4", 00:20:31.212 "trsvcid": "4420", 00:20:31.212 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:31.212 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:31.212 "hdgst": false, 00:20:31.212 "ddgst": false 00:20:31.212 }, 00:20:31.212 "method": "bdev_nvme_attach_controller" 00:20:31.212 },{ 00:20:31.212 "params": { 00:20:31.212 "name": "Nvme3", 00:20:31.213 "trtype": "tcp", 00:20:31.213 "traddr": "10.0.0.2", 00:20:31.213 "adrfam": "ipv4", 00:20:31.213 "trsvcid": "4420", 00:20:31.213 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:31.213 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:31.213 "hdgst": false, 00:20:31.213 "ddgst": false 00:20:31.213 }, 00:20:31.213 "method": "bdev_nvme_attach_controller" 00:20:31.213 },{ 00:20:31.213 "params": { 00:20:31.213 "name": "Nvme4", 00:20:31.213 "trtype": "tcp", 00:20:31.213 "traddr": "10.0.0.2", 00:20:31.213 "adrfam": "ipv4", 00:20:31.213 "trsvcid": "4420", 00:20:31.213 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:31.213 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:31.213 "hdgst": false, 00:20:31.213 "ddgst": false 00:20:31.213 }, 00:20:31.213 "method": "bdev_nvme_attach_controller" 00:20:31.213 },{ 00:20:31.213 "params": { 00:20:31.213 "name": "Nvme5", 00:20:31.213 "trtype": "tcp", 00:20:31.213 "traddr": "10.0.0.2", 00:20:31.213 "adrfam": "ipv4", 00:20:31.213 "trsvcid": "4420", 00:20:31.213 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:31.213 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:31.213 "hdgst": false, 00:20:31.213 "ddgst": false 00:20:31.213 }, 00:20:31.213 "method": "bdev_nvme_attach_controller" 00:20:31.213 },{ 00:20:31.213 "params": { 00:20:31.213 "name": "Nvme6", 00:20:31.213 "trtype": "tcp", 00:20:31.213 "traddr": "10.0.0.2", 00:20:31.213 "adrfam": "ipv4", 00:20:31.213 "trsvcid": "4420", 00:20:31.213 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:31.213 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:31.213 "hdgst": false, 00:20:31.213 "ddgst": false 00:20:31.213 }, 00:20:31.213 "method": "bdev_nvme_attach_controller" 00:20:31.213 },{ 00:20:31.213 "params": { 00:20:31.213 "name": "Nvme7", 00:20:31.213 "trtype": "tcp", 00:20:31.213 "traddr": "10.0.0.2", 00:20:31.213 "adrfam": "ipv4", 00:20:31.213 "trsvcid": "4420", 00:20:31.213 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:31.213 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:20:31.213 "hdgst": false, 00:20:31.213 "ddgst": false 00:20:31.213 }, 00:20:31.213 "method": "bdev_nvme_attach_controller" 00:20:31.213 },{ 00:20:31.213 "params": { 00:20:31.213 "name": "Nvme8", 00:20:31.213 "trtype": "tcp", 00:20:31.213 "traddr": "10.0.0.2", 00:20:31.213 "adrfam": "ipv4", 00:20:31.213 "trsvcid": "4420", 00:20:31.213 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:31.213 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:31.213 "hdgst": false, 00:20:31.213 "ddgst": false 00:20:31.213 }, 00:20:31.213 "method": "bdev_nvme_attach_controller" 00:20:31.213 },{ 00:20:31.213 "params": { 00:20:31.213 "name": "Nvme9", 00:20:31.213 "trtype": "tcp", 00:20:31.213 "traddr": "10.0.0.2", 00:20:31.213 "adrfam": "ipv4", 00:20:31.213 "trsvcid": "4420", 00:20:31.213 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:31.213 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:31.213 "hdgst": false, 00:20:31.213 "ddgst": false 00:20:31.213 }, 00:20:31.213 "method": "bdev_nvme_attach_controller" 00:20:31.213 },{ 00:20:31.213 "params": { 00:20:31.213 "name": "Nvme10", 00:20:31.213 "trtype": "tcp", 00:20:31.213 "traddr": "10.0.0.2", 00:20:31.213 "adrfam": "ipv4", 00:20:31.213 "trsvcid": "4420", 00:20:31.213 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:31.213 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:31.213 "hdgst": false, 00:20:31.213 "ddgst": false 00:20:31.213 }, 00:20:31.213 "method": "bdev_nvme_attach_controller" 00:20:31.213 }' 00:20:31.213 [2024-11-07 10:48:58.790484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.213 [2024-11-07 10:48:58.832303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.112 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:33.112 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:20:33.112 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:33.112 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.112 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:33.112 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.112 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2739776 00:20:33.112 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:33.112 10:49:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:34.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2739776 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:34.046 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2739613 00:20:34.046 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:34.046 10:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:34.046 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:34.046 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:34.046 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.046 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.046 { 00:20:34.046 "params": { 00:20:34.046 "name": "Nvme$subsystem", 00:20:34.046 "trtype": "$TEST_TRANSPORT", 00:20:34.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.046 "adrfam": "ipv4", 00:20:34.046 "trsvcid": "$NVMF_PORT", 00:20:34.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.046 "hdgst": ${hdgst:-false}, 00:20:34.046 "ddgst": ${ddgst:-false} 00:20:34.046 }, 00:20:34.046 "method": "bdev_nvme_attach_controller" 00:20:34.046 } 00:20:34.046 EOF 00:20:34.046 )") 00:20:34.046 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.046 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.046 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.046 { 00:20:34.046 "params": { 00:20:34.046 "name": "Nvme$subsystem", 00:20:34.046 "trtype": "$TEST_TRANSPORT", 00:20:34.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.046 "adrfam": "ipv4", 00:20:34.046 "trsvcid": "$NVMF_PORT", 00:20:34.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.046 "hdgst": ${hdgst:-false}, 00:20:34.046 "ddgst": ${ddgst:-false} 00:20:34.046 }, 00:20:34.046 "method": "bdev_nvme_attach_controller" 00:20:34.046 } 00:20:34.046 EOF 00:20:34.046 )") 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.047 { 00:20:34.047 "params": { 00:20:34.047 "name": "Nvme$subsystem", 00:20:34.047 "trtype": "$TEST_TRANSPORT", 00:20:34.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.047 "adrfam": "ipv4", 00:20:34.047 "trsvcid": "$NVMF_PORT", 00:20:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.047 "hdgst": ${hdgst:-false}, 00:20:34.047 "ddgst": ${ddgst:-false} 00:20:34.047 }, 00:20:34.047 "method": "bdev_nvme_attach_controller" 00:20:34.047 } 00:20:34.047 EOF 00:20:34.047 )") 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.047 { 00:20:34.047 "params": { 00:20:34.047 "name": "Nvme$subsystem", 00:20:34.047 "trtype": 
"$TEST_TRANSPORT", 00:20:34.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.047 "adrfam": "ipv4", 00:20:34.047 "trsvcid": "$NVMF_PORT", 00:20:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.047 "hdgst": ${hdgst:-false}, 00:20:34.047 "ddgst": ${ddgst:-false} 00:20:34.047 }, 00:20:34.047 "method": "bdev_nvme_attach_controller" 00:20:34.047 } 00:20:34.047 EOF 00:20:34.047 )") 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.047 { 00:20:34.047 "params": { 00:20:34.047 "name": "Nvme$subsystem", 00:20:34.047 "trtype": "$TEST_TRANSPORT", 00:20:34.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.047 "adrfam": "ipv4", 00:20:34.047 "trsvcid": "$NVMF_PORT", 00:20:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.047 "hdgst": ${hdgst:-false}, 00:20:34.047 "ddgst": ${ddgst:-false} 00:20:34.047 }, 00:20:34.047 "method": "bdev_nvme_attach_controller" 00:20:34.047 } 00:20:34.047 EOF 00:20:34.047 )") 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.047 { 00:20:34.047 "params": { 00:20:34.047 "name": "Nvme$subsystem", 00:20:34.047 "trtype": "$TEST_TRANSPORT", 00:20:34.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.047 "adrfam": "ipv4", 00:20:34.047 "trsvcid": "$NVMF_PORT", 00:20:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.047 "hdgst": ${hdgst:-false}, 00:20:34.047 "ddgst": ${ddgst:-false} 00:20:34.047 }, 00:20:34.047 "method": "bdev_nvme_attach_controller" 00:20:34.047 } 00:20:34.047 EOF 00:20:34.047 )") 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.047 { 00:20:34.047 "params": { 00:20:34.047 "name": "Nvme$subsystem", 00:20:34.047 "trtype": "$TEST_TRANSPORT", 00:20:34.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.047 "adrfam": "ipv4", 00:20:34.047 "trsvcid": "$NVMF_PORT", 00:20:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.047 "hdgst": ${hdgst:-false}, 00:20:34.047 "ddgst": ${ddgst:-false} 00:20:34.047 }, 00:20:34.047 "method": "bdev_nvme_attach_controller" 00:20:34.047 } 00:20:34.047 EOF 00:20:34.047 )") 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.047 [2024-11-07 10:49:01.676572] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:20:34.047 [2024-11-07 10:49:01.676633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2740325 ] 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.047 { 00:20:34.047 "params": { 00:20:34.047 "name": "Nvme$subsystem", 00:20:34.047 "trtype": "$TEST_TRANSPORT", 00:20:34.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.047 "adrfam": "ipv4", 00:20:34.047 "trsvcid": "$NVMF_PORT", 00:20:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.047 "hdgst": ${hdgst:-false}, 00:20:34.047 "ddgst": ${ddgst:-false} 00:20:34.047 }, 00:20:34.047 "method": "bdev_nvme_attach_controller" 00:20:34.047 } 00:20:34.047 EOF 00:20:34.047 )") 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.047 { 00:20:34.047 "params": { 00:20:34.047 "name": "Nvme$subsystem", 00:20:34.047 "trtype": "$TEST_TRANSPORT", 00:20:34.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.047 "adrfam": "ipv4", 00:20:34.047 "trsvcid": "$NVMF_PORT", 00:20:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.047 "hdgst": ${hdgst:-false}, 00:20:34.047 "ddgst": ${ddgst:-false} 00:20:34.047 }, 00:20:34.047 "method": "bdev_nvme_attach_controller" 00:20:34.047 } 00:20:34.047 EOF 00:20:34.047 )") 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:34.047 { 00:20:34.047 "params": { 00:20:34.047 "name": "Nvme$subsystem", 00:20:34.047 "trtype": "$TEST_TRANSPORT", 00:20:34.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.047 "adrfam": "ipv4", 00:20:34.047 "trsvcid": "$NVMF_PORT", 00:20:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.047 "hdgst": ${hdgst:-false}, 00:20:34.047 "ddgst": ${ddgst:-false} 00:20:34.047 }, 00:20:34.047 "method": "bdev_nvme_attach_controller" 00:20:34.047 } 00:20:34.047 EOF 00:20:34.047 )") 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
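The xtrace above is nvmf/common.sh assembling one bdev_nvme_attach_controller entry per subsystem and then comma-joining the fragments; the joined JSON is printed just below and is fed to bdev_svc/bdevperf through process substitution (--json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)). A minimal standalone sketch of that pattern follows; the variable values are illustrative and this is not the exact helper from nvmf/common.sh.

    #!/usr/bin/env bash
    # Sketch only: mirrors the config-array build-up traced above; values are
    # examples, not this run's actual settings.
    TEST_TRANSPORT=tcp
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_PORT=4420

    config=()
    for subsystem in "${@:-1}"; do
      # one JSON fragment per subsystem, exactly as repeated in the trace above
      config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
      )")
    done

    # Comma-join the fragments (IFS=, plus "${config[*]}"); that join is what
    # produces the '{...},{...}' string printed below. The real helper wraps the
    # list in a full SPDK JSON config and pretty-prints it with 'jq .' before it
    # is consumed via: bdevperf --json <(gen_nvmf_target_json 1 2 ... 10) ...
    IFS=,
    printf '%s\n' "${config[*]}"

Note that the heredoc body deliberately leaves $subsystem, ${hdgst:-false} and ${ddgst:-false} unquoted so they expand at assembly time, which is why the trace shows the literal template first and the fully resolved per-controller JSON afterwards.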
00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:34.047 10:49:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:34.047 "params": { 00:20:34.047 "name": "Nvme1", 00:20:34.047 "trtype": "tcp", 00:20:34.047 "traddr": "10.0.0.2", 00:20:34.047 "adrfam": "ipv4", 00:20:34.047 "trsvcid": "4420", 00:20:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.047 "hdgst": false, 00:20:34.047 "ddgst": false 00:20:34.047 }, 00:20:34.047 "method": "bdev_nvme_attach_controller" 00:20:34.047 },{ 00:20:34.047 "params": { 00:20:34.047 "name": "Nvme2", 00:20:34.047 "trtype": "tcp", 00:20:34.047 "traddr": "10.0.0.2", 00:20:34.047 "adrfam": "ipv4", 00:20:34.047 "trsvcid": "4420", 00:20:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:34.047 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:34.047 "hdgst": false, 00:20:34.047 "ddgst": false 00:20:34.047 }, 00:20:34.047 "method": "bdev_nvme_attach_controller" 00:20:34.047 },{ 00:20:34.047 "params": { 00:20:34.047 "name": "Nvme3", 00:20:34.047 "trtype": "tcp", 00:20:34.047 "traddr": "10.0.0.2", 00:20:34.047 "adrfam": "ipv4", 00:20:34.047 "trsvcid": "4420", 00:20:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:34.047 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:34.047 "hdgst": false, 00:20:34.047 "ddgst": false 00:20:34.047 }, 00:20:34.047 "method": "bdev_nvme_attach_controller" 00:20:34.047 },{ 00:20:34.047 "params": { 00:20:34.047 "name": "Nvme4", 00:20:34.047 "trtype": "tcp", 00:20:34.047 "traddr": "10.0.0.2", 00:20:34.047 "adrfam": "ipv4", 00:20:34.047 "trsvcid": "4420", 00:20:34.047 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:34.048 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:34.048 "hdgst": false, 00:20:34.048 "ddgst": false 00:20:34.048 }, 00:20:34.048 "method": "bdev_nvme_attach_controller" 00:20:34.048 },{ 00:20:34.048 "params": { 00:20:34.048 "name": "Nvme5", 00:20:34.048 "trtype": "tcp", 00:20:34.048 "traddr": "10.0.0.2", 00:20:34.048 "adrfam": "ipv4", 00:20:34.048 "trsvcid": "4420", 00:20:34.048 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:34.048 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:34.048 "hdgst": false, 00:20:34.048 "ddgst": false 00:20:34.048 }, 00:20:34.048 "method": "bdev_nvme_attach_controller" 00:20:34.048 },{ 00:20:34.048 "params": { 00:20:34.048 "name": "Nvme6", 00:20:34.048 "trtype": "tcp", 00:20:34.048 "traddr": "10.0.0.2", 00:20:34.048 "adrfam": "ipv4", 00:20:34.048 "trsvcid": "4420", 00:20:34.048 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:34.048 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:34.048 "hdgst": false, 00:20:34.048 "ddgst": false 00:20:34.048 }, 00:20:34.048 "method": "bdev_nvme_attach_controller" 00:20:34.048 },{ 00:20:34.048 "params": { 00:20:34.048 "name": "Nvme7", 00:20:34.048 "trtype": "tcp", 00:20:34.048 "traddr": "10.0.0.2", 00:20:34.048 "adrfam": "ipv4", 00:20:34.048 "trsvcid": "4420", 00:20:34.048 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:34.048 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:34.048 "hdgst": false, 00:20:34.048 "ddgst": false 00:20:34.048 }, 00:20:34.048 "method": "bdev_nvme_attach_controller" 00:20:34.048 },{ 00:20:34.048 "params": { 00:20:34.048 "name": "Nvme8", 00:20:34.048 "trtype": "tcp", 00:20:34.048 "traddr": "10.0.0.2", 00:20:34.048 "adrfam": "ipv4", 00:20:34.048 "trsvcid": "4420", 00:20:34.048 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:34.048 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:34.048 "hdgst": false, 00:20:34.048 "ddgst": false 00:20:34.048 }, 00:20:34.048 "method": "bdev_nvme_attach_controller" 00:20:34.048 },{ 00:20:34.048 "params": { 00:20:34.048 "name": "Nvme9", 00:20:34.048 "trtype": "tcp", 00:20:34.048 "traddr": "10.0.0.2", 00:20:34.048 "adrfam": "ipv4", 00:20:34.048 "trsvcid": "4420", 00:20:34.048 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:34.048 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:34.048 "hdgst": false, 00:20:34.048 "ddgst": false 00:20:34.048 }, 00:20:34.048 "method": "bdev_nvme_attach_controller" 00:20:34.048 },{ 00:20:34.048 "params": { 00:20:34.048 "name": "Nvme10", 00:20:34.048 "trtype": "tcp", 00:20:34.048 "traddr": "10.0.0.2", 00:20:34.048 "adrfam": "ipv4", 00:20:34.048 "trsvcid": "4420", 00:20:34.048 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:34.048 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:34.048 "hdgst": false, 00:20:34.048 "ddgst": false 00:20:34.048 }, 00:20:34.048 "method": "bdev_nvme_attach_controller" 00:20:34.048 }' 00:20:34.306 [2024-11-07 10:49:01.741860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.306 [2024-11-07 10:49:01.783227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.679 Running I/O for 1 seconds... 00:20:36.614 2250.00 IOPS, 140.62 MiB/s 00:20:36.614 Latency(us) 00:20:36.614 [2024-11-07T09:49:04.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.614 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.614 Verification LBA range: start 0x0 length 0x400 00:20:36.614 Nvme1n1 : 1.17 273.73 17.11 0.00 0.00 231725.86 19147.91 218833.25 00:20:36.614 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.614 Verification LBA range: start 0x0 length 0x400 00:20:36.614 Nvme2n1 : 1.10 232.05 14.50 0.00 0.00 269267.92 16868.40 227951.30 00:20:36.614 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.614 Verification LBA range: start 0x0 length 0x400 00:20:36.614 Nvme3n1 : 1.13 288.06 18.00 0.00 0.00 210243.17 15158.76 211538.81 00:20:36.614 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.614 Verification LBA range: start 0x0 length 0x400 00:20:36.614 Nvme4n1 : 1.16 276.39 17.27 0.00 0.00 219832.41 12480.33 224304.08 00:20:36.614 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.614 Verification LBA range: start 0x0 length 0x400 00:20:36.615 Nvme5n1 : 1.18 272.30 17.02 0.00 0.00 218790.69 16184.54 220656.86 00:20:36.615 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.615 Verification LBA range: start 0x0 length 0x400 00:20:36.615 Nvme6n1 : 1.17 274.36 17.15 0.00 0.00 215021.66 16184.54 214274.23 00:20:36.615 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.615 Verification LBA range: start 0x0 length 0x400 00:20:36.615 Nvme7n1 : 1.16 274.91 17.18 0.00 0.00 211530.62 18350.08 217921.45 00:20:36.615 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.615 Verification LBA range: start 0x0 length 0x400 00:20:36.615 Nvme8n1 : 1.18 271.75 16.98 0.00 0.00 211004.55 13563.10 237069.36 00:20:36.615 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:36.615 Verification LBA range: start 0x0 length 0x400 00:20:36.615 Nvme9n1 : 1.18 270.10 16.88 0.00 0.00 209258.41 18919.96 233422.14 00:20:36.615 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:20:36.615 Verification LBA range: start 0x0 length 0x400 00:20:36.615 Nvme10n1 : 1.18 270.51 16.91 0.00 0.00 205784.46 13734.07 240716.58 00:20:36.615 [2024-11-07T09:49:04.286Z] =================================================================================================================== 00:20:36.615 [2024-11-07T09:49:04.286Z] Total : 2704.17 169.01 0.00 0.00 219231.20 12480.33 240716.58 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:36.873 rmmod nvme_tcp 00:20:36.873 rmmod nvme_fabrics 00:20:36.873 rmmod nvme_keyring 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2739613 ']' 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2739613 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 2739613 ']' 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 2739613 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.873 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2739613 00:20:37.132 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:37.132 10:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:37.132 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2739613' 00:20:37.132 killing process with pid 2739613 00:20:37.132 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 2739613 00:20:37.132 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 2739613 00:20:37.390 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:37.390 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:37.390 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:37.390 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:37.390 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:37.390 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:37.390 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:37.390 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.390 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:37.390 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.390 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.390 10:49:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:39.923 00:20:39.923 real 0m15.027s 00:20:39.923 user 0m33.522s 00:20:39.923 sys 0m5.714s 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:39.923 ************************************ 00:20:39.923 END TEST nvmf_shutdown_tc1 00:20:39.923 ************************************ 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:39.923 ************************************ 00:20:39.923 START TEST nvmf_shutdown_tc2 00:20:39.923 ************************************ 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # 
nvmf_shutdown_tc2 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:39.923 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:39.924 10:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:39.924 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.924 10:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:39.924 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:39.924 Found net devices under 0000:86:00.0: cvl_0_0 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.924 10:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:39.924 Found net devices under 0000:86:00.1: cvl_0_1 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:39.924 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:39.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:20:39.925 00:20:39.925 --- 10.0.0.2 ping statistics --- 00:20:39.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.925 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:39.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:39.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:20:39.925 00:20:39.925 --- 10.0.0.1 ping statistics --- 00:20:39.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.925 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2741364 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2741364 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2741364 ']' 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:39.925 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.925 [2024-11-07 10:49:07.430200] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:20:39.925 [2024-11-07 10:49:07.430245] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.925 [2024-11-07 10:49:07.498003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.925 [2024-11-07 10:49:07.540668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.925 [2024-11-07 10:49:07.540706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.925 [2024-11-07 10:49:07.540713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.925 [2024-11-07 10:49:07.540719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.925 [2024-11-07 10:49:07.540724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
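The entries above show nvmfappstart launching nvmf_tgt inside the test namespace and waitforlisten blocking until its RPC socket answers; once the reactors are up, the TCP transport is created (shutdown.sh@21 below). A minimal sketch of that start-and-wait sequence, assuming SPDK's stock scripts/rpc.py and the default /var/tmp/spdk.sock socket; the retry loop here stands in for the suite's waitforlisten helper and is not the test code itself.

    # Sketch only: namespace name, core mask and flags are taken from this run.
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip netns exec "$NVMF_TARGET_NAMESPACE" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # Poll until the target's RPC server is up and initialization has finished,
    # then create the TCP transport that the ten subsystems below attach to.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
      sleep 1
    done
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192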
00:20:39.925 [2024-11-07 10:49:07.542303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.925 [2024-11-07 10:49:07.542390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.925 [2024-11-07 10:49:07.542500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:39.925 [2024-11-07 10:49:07.542501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.183 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:40.183 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:20:40.183 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.184 [2024-11-07 10:49:07.679760] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.184 10:49:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.184 Malloc1 00:20:40.184 [2024-11-07 10:49:07.799133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.184 Malloc2 00:20:40.442 Malloc3 00:20:40.442 Malloc4 00:20:40.442 Malloc5 00:20:40.442 Malloc6 00:20:40.442 Malloc7 00:20:40.442 Malloc8 00:20:40.753 Malloc9 00:20:40.753 Malloc10 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2741468 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2741468 /var/tmp/bdevperf.sock 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2741468 ']' 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.753 10:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.753 { 00:20:40.753 "params": { 00:20:40.753 "name": "Nvme$subsystem", 00:20:40.753 "trtype": "$TEST_TRANSPORT", 00:20:40.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.753 "adrfam": "ipv4", 00:20:40.753 "trsvcid": "$NVMF_PORT", 00:20:40.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.753 "hdgst": ${hdgst:-false}, 00:20:40.753 "ddgst": ${ddgst:-false} 00:20:40.753 }, 00:20:40.753 "method": "bdev_nvme_attach_controller" 00:20:40.753 } 00:20:40.753 EOF 00:20:40.753 )") 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.753 { 00:20:40.753 "params": { 00:20:40.753 "name": "Nvme$subsystem", 00:20:40.753 "trtype": "$TEST_TRANSPORT", 00:20:40.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.753 "adrfam": "ipv4", 00:20:40.753 "trsvcid": "$NVMF_PORT", 00:20:40.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.753 "hdgst": ${hdgst:-false}, 00:20:40.753 "ddgst": ${ddgst:-false} 00:20:40.753 }, 00:20:40.753 "method": "bdev_nvme_attach_controller" 00:20:40.753 } 00:20:40.753 EOF 00:20:40.753 )") 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.753 { 00:20:40.753 "params": { 00:20:40.753 
"name": "Nvme$subsystem", 00:20:40.753 "trtype": "$TEST_TRANSPORT", 00:20:40.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.753 "adrfam": "ipv4", 00:20:40.753 "trsvcid": "$NVMF_PORT", 00:20:40.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.753 "hdgst": ${hdgst:-false}, 00:20:40.753 "ddgst": ${ddgst:-false} 00:20:40.753 }, 00:20:40.753 "method": "bdev_nvme_attach_controller" 00:20:40.753 } 00:20:40.753 EOF 00:20:40.753 )") 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.753 { 00:20:40.753 "params": { 00:20:40.753 "name": "Nvme$subsystem", 00:20:40.753 "trtype": "$TEST_TRANSPORT", 00:20:40.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.753 "adrfam": "ipv4", 00:20:40.753 "trsvcid": "$NVMF_PORT", 00:20:40.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.753 "hdgst": ${hdgst:-false}, 00:20:40.753 "ddgst": ${ddgst:-false} 00:20:40.753 }, 00:20:40.753 "method": "bdev_nvme_attach_controller" 00:20:40.753 } 00:20:40.753 EOF 00:20:40.753 )") 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.753 { 00:20:40.753 "params": { 00:20:40.753 "name": "Nvme$subsystem", 00:20:40.753 "trtype": "$TEST_TRANSPORT", 00:20:40.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.753 "adrfam": "ipv4", 00:20:40.753 "trsvcid": "$NVMF_PORT", 00:20:40.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.753 "hdgst": ${hdgst:-false}, 00:20:40.753 "ddgst": ${ddgst:-false} 00:20:40.753 }, 00:20:40.753 "method": "bdev_nvme_attach_controller" 00:20:40.753 } 00:20:40.753 EOF 00:20:40.753 )") 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.753 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.753 { 00:20:40.753 "params": { 00:20:40.753 "name": "Nvme$subsystem", 00:20:40.754 "trtype": "$TEST_TRANSPORT", 00:20:40.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "$NVMF_PORT", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.754 "hdgst": ${hdgst:-false}, 00:20:40.754 "ddgst": ${ddgst:-false} 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 } 00:20:40.754 EOF 00:20:40.754 )") 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.754 { 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme$subsystem", 00:20:40.754 "trtype": "$TEST_TRANSPORT", 00:20:40.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "$NVMF_PORT", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.754 "hdgst": ${hdgst:-false}, 00:20:40.754 "ddgst": ${ddgst:-false} 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 } 00:20:40.754 EOF 00:20:40.754 )") 00:20:40.754 [2024-11-07 10:49:08.269032] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:20:40.754 [2024-11-07 10:49:08.269079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2741468 ] 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.754 { 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme$subsystem", 00:20:40.754 "trtype": "$TEST_TRANSPORT", 00:20:40.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "$NVMF_PORT", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.754 "hdgst": ${hdgst:-false}, 00:20:40.754 "ddgst": ${ddgst:-false} 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 } 00:20:40.754 EOF 00:20:40.754 )") 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.754 { 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme$subsystem", 00:20:40.754 "trtype": "$TEST_TRANSPORT", 00:20:40.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "$NVMF_PORT", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.754 "hdgst": ${hdgst:-false}, 00:20:40.754 "ddgst": ${ddgst:-false} 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 } 00:20:40.754 EOF 00:20:40.754 )") 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:40.754 { 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme$subsystem", 00:20:40.754 "trtype": "$TEST_TRANSPORT", 00:20:40.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.754 
"adrfam": "ipv4", 00:20:40.754 "trsvcid": "$NVMF_PORT", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.754 "hdgst": ${hdgst:-false}, 00:20:40.754 "ddgst": ${ddgst:-false} 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 } 00:20:40.754 EOF 00:20:40.754 )") 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:40.754 10:49:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme1", 00:20:40.754 "trtype": "tcp", 00:20:40.754 "traddr": "10.0.0.2", 00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "4420", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.754 "hdgst": false, 00:20:40.754 "ddgst": false 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 },{ 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme2", 00:20:40.754 "trtype": "tcp", 00:20:40.754 "traddr": "10.0.0.2", 00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "4420", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:40.754 "hdgst": false, 00:20:40.754 "ddgst": false 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 },{ 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme3", 00:20:40.754 "trtype": "tcp", 00:20:40.754 "traddr": "10.0.0.2", 00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "4420", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:40.754 "hdgst": false, 00:20:40.754 "ddgst": false 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 },{ 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme4", 00:20:40.754 "trtype": "tcp", 00:20:40.754 "traddr": "10.0.0.2", 00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "4420", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:40.754 "hdgst": false, 00:20:40.754 "ddgst": false 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 },{ 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme5", 00:20:40.754 "trtype": "tcp", 00:20:40.754 "traddr": "10.0.0.2", 00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "4420", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:40.754 "hdgst": false, 00:20:40.754 "ddgst": false 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 },{ 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme6", 00:20:40.754 "trtype": "tcp", 00:20:40.754 "traddr": "10.0.0.2", 00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "4420", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:40.754 "hdgst": false, 00:20:40.754 "ddgst": false 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 },{ 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme7", 00:20:40.754 "trtype": "tcp", 00:20:40.754 "traddr": "10.0.0.2", 
00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "4420", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:40.754 "hdgst": false, 00:20:40.754 "ddgst": false 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 },{ 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme8", 00:20:40.754 "trtype": "tcp", 00:20:40.754 "traddr": "10.0.0.2", 00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "4420", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:40.754 "hdgst": false, 00:20:40.754 "ddgst": false 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 },{ 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme9", 00:20:40.754 "trtype": "tcp", 00:20:40.754 "traddr": "10.0.0.2", 00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "4420", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:40.754 "hdgst": false, 00:20:40.754 "ddgst": false 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 },{ 00:20:40.754 "params": { 00:20:40.754 "name": "Nvme10", 00:20:40.754 "trtype": "tcp", 00:20:40.754 "traddr": "10.0.0.2", 00:20:40.754 "adrfam": "ipv4", 00:20:40.754 "trsvcid": "4420", 00:20:40.754 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:40.754 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:40.754 "hdgst": false, 00:20:40.754 "ddgst": false 00:20:40.754 }, 00:20:40.754 "method": "bdev_nvme_attach_controller" 00:20:40.754 }' 00:20:40.754 [2024-11-07 10:49:08.332715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.754 [2024-11-07 10:49:08.375596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.175 Running I/O for 10 seconds... 
00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:42.742 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.001 10:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2741468
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2741468 ']'
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2741468
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2741468
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2741468'
00:20:43.001 killing process with pid 2741468
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2741468
00:20:43.001 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2741468
00:20:43.001 Received shutdown signal, test time was about 0.790288 seconds
00:20:43.001
00:20:43.001 Latency(us)
00:20:43.001 [2024-11-07T09:49:10.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:43.001 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.001 Verification LBA range: start 0x0 length 0x400
00:20:43.001 Nvme1n1 : 0.77 250.37 15.65 0.00 0.00 252376.23 19033.93 215186.03
00:20:43.001 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.001 Verification LBA range: start 0x0 length 0x400
00:20:43.001 Nvme2n1 : 0.78 247.52 15.47 0.00 0.00 249244.27 18805.98 219745.06
00:20:43.001 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.001 Verification LBA range: start 0x0 length 0x400
00:20:43.001 Nvme3n1 : 0.76 252.27 15.77 0.00 0.00 239787.26 23706.94 215186.03
00:20:43.001 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.001 Verification LBA range: start 0x0 length 0x400
00:20:43.001 Nvme4n1 : 0.76 253.09 15.82 0.00 0.00 233688.08 17894.18 216097.84
00:20:43.001 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.001 Verification LBA range: start 0x0 length 0x400
00:20:43.001 Nvme5n1 : 0.77 277.33 17.33 0.00 0.00 203668.07 16982.37 219745.06
00:20:43.001 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.001 Verification LBA range: start 0x0 length 0x400
00:20:43.001 Nvme6n1 : 0.79 324.21 20.26 0.00 0.00 174983.79 16526.47 219745.06
00:20:43.001 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.001 Verification LBA range: start 0x0 length 0x400
00:20:43.001 Nvme7n1 : 0.79 325.53 20.35 0.00 0.00 168594.59 10029.86 218833.25
00:20:43.001 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.001 Verification LBA range: start 0x0 length 0x400
00:20:43.001 Nvme8n1 : 0.77 247.81 15.49 0.00 0.00 217831.22 15158.76 220656.86
00:20:43.001 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.001 Verification LBA range: start 0x0 length 0x400
00:20:43.001 Nvme9n1 : 0.78 245.17 15.32 0.00 0.00 214673.73 33052.94 231598.53
00:20:43.001 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:43.001 Verification LBA range: start 0x0 length 0x400
00:20:43.001 Nvme10n1 : 0.78 245.42 15.34 0.00 0.00 209699.17 16754.42 251658.24
00:20:43.001 [2024-11-07T09:49:10.672Z] ===================================================================================================================
00:20:43.001 [2024-11-07T09:49:10.672Z] Total : 2668.73 166.80 0.00 0.00 213561.60 10029.86 251658.24
00:20:43.259 10:49:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:20:44.192 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2741364
00:20:44.192 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:20:44.192 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:44.192 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:44.192 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:44.192 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:44.192 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:44.192 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:20:44.192 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:44.192 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:20:44.192 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:44.192 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:44.192 rmmod nvme_tcp
00:20:44.192 rmmod nvme_fabrics
00:20:44.192 rmmod nvme_keyring
00:20:44.451 10:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2741364 ']' 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2741364 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2741364 ']' 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2741364 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2741364 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2741364' 00:20:44.451 killing process with pid 2741364 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2741364 00:20:44.451 10:49:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2741364 00:20:44.710 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:44.710 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:44.710 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:44.710 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:44.710 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:44.710 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:44.710 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:44.710 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:44.710 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:44.710 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.710 10:49:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.710 10:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:47.244 00:20:47.244 real 0m7.257s 00:20:47.244 user 0m21.390s 00:20:47.244 sys 0m1.228s 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:47.244 ************************************ 00:20:47.244 END TEST nvmf_shutdown_tc2 00:20:47.244 ************************************ 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:47.244 ************************************ 00:20:47.244 START TEST nvmf_shutdown_tc3 00:20:47.244 ************************************ 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.244 10:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.244 10:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:47.244 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:47.244 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.244 10:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:47.244 Found net devices under 0000:86:00.0: cvl_0_0 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.244 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:47.245 Found net devices under 0000:86:00.1: cvl_0_1 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:47.245 10:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:47.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:20:47.245 00:20:47.245 --- 10.0.0.2 ping statistics --- 00:20:47.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.245 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:47.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:20:47.245 00:20:47.245 --- 10.0.0.1 ping statistics --- 00:20:47.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.245 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2742734 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2742734 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2742734 ']' 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:47.245 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.245 [2024-11-07 10:49:14.788525] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:20:47.245 [2024-11-07 10:49:14.788571] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.245 [2024-11-07 10:49:14.855061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.245 [2024-11-07 10:49:14.897642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.245 [2024-11-07 10:49:14.897683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.245 [2024-11-07 10:49:14.897690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.245 [2024-11-07 10:49:14.897697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.245 [2024-11-07 10:49:14.897703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.245 [2024-11-07 10:49:14.899161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.245 [2024-11-07 10:49:14.899251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.245 [2024-11-07 10:49:14.899358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.245 [2024-11-07 10:49:14.899359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:47.504 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:47.504 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:20:47.504 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.504 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:47.504 10:49:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.504 [2024-11-07 10:49:15.036753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:47.504 10:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.504 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.504 Malloc1 
00:20:47.504 [2024-11-07 10:49:15.153624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.762 Malloc2 00:20:47.762 Malloc3 00:20:47.762 Malloc4 00:20:47.762 Malloc5 00:20:47.762 Malloc6 00:20:47.762 Malloc7 00:20:48.021 Malloc8 00:20:48.021 Malloc9 00:20:48.021 Malloc10 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2742811 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2742811 /var/tmp/bdevperf.sock 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2742811 ']' 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
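The lines above launch bdevperf against the freshly created subsystems: a private RPC socket is passed with -r, and the attach configuration generated by gen_nvmf_target_json 1..10 is delivered through process substitution, which is why the command line shows --json /dev/fd/63 rather than a file path. A minimal reconstruction of that launch, paraphrased from the command logged above (trap/cleanup plumbing omitted):

```bash
# Reconstruction of the bdevperf launch logged above. The JSON config is
# produced on the fly by gen_nvmf_target_json (the test/nvmf/common.sh helper
# seen in the trace) and handed to bdevperf via process substitution, so it
# never touches disk.
# -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: verify workload,
# -t 10: run for 10 seconds, -r: private RPC socket for this bdevperf instance.
./build/examples/bdevperf \
  -r /var/tmp/bdevperf.sock \
  --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
  -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
# Same helper used in the trace above: block until the new process is up and
# listening on its RPC socket before issuing further RPCs to it.
waitforlisten "$perfpid" /var/tmp/bdevperf.sock
```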
00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.021 { 00:20:48.021 "params": { 00:20:48.021 "name": "Nvme$subsystem", 00:20:48.021 "trtype": "$TEST_TRANSPORT", 00:20:48.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.021 "adrfam": "ipv4", 00:20:48.021 "trsvcid": "$NVMF_PORT", 00:20:48.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.021 "hdgst": ${hdgst:-false}, 00:20:48.021 "ddgst": ${ddgst:-false} 00:20:48.021 }, 00:20:48.021 "method": "bdev_nvme_attach_controller" 00:20:48.021 } 00:20:48.021 EOF 00:20:48.021 )") 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.021 { 00:20:48.021 "params": { 00:20:48.021 "name": "Nvme$subsystem", 00:20:48.021 "trtype": "$TEST_TRANSPORT", 00:20:48.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.021 "adrfam": "ipv4", 00:20:48.021 "trsvcid": "$NVMF_PORT", 00:20:48.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.021 "hdgst": ${hdgst:-false}, 00:20:48.021 "ddgst": ${ddgst:-false} 00:20:48.021 }, 00:20:48.021 "method": "bdev_nvme_attach_controller" 00:20:48.021 } 00:20:48.021 EOF 00:20:48.021 )") 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.021 { 00:20:48.021 "params": { 00:20:48.021 "name": "Nvme$subsystem", 00:20:48.021 "trtype": "$TEST_TRANSPORT", 00:20:48.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.021 "adrfam": "ipv4", 00:20:48.021 "trsvcid": "$NVMF_PORT", 00:20:48.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.021 "hdgst": ${hdgst:-false}, 00:20:48.021 "ddgst": ${ddgst:-false} 00:20:48.021 }, 00:20:48.021 "method": "bdev_nvme_attach_controller" 00:20:48.021 } 00:20:48.021 EOF 00:20:48.021 )") 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:20:48.021 { 00:20:48.021 "params": { 00:20:48.021 "name": "Nvme$subsystem", 00:20:48.021 "trtype": "$TEST_TRANSPORT", 00:20:48.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.021 "adrfam": "ipv4", 00:20:48.021 "trsvcid": "$NVMF_PORT", 00:20:48.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.021 "hdgst": ${hdgst:-false}, 00:20:48.021 "ddgst": ${ddgst:-false} 00:20:48.021 }, 00:20:48.021 "method": "bdev_nvme_attach_controller" 00:20:48.021 } 00:20:48.021 EOF 00:20:48.021 )") 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.021 { 00:20:48.021 "params": { 00:20:48.021 "name": "Nvme$subsystem", 00:20:48.021 "trtype": "$TEST_TRANSPORT", 00:20:48.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.021 "adrfam": "ipv4", 00:20:48.021 "trsvcid": "$NVMF_PORT", 00:20:48.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.021 "hdgst": ${hdgst:-false}, 00:20:48.021 "ddgst": ${ddgst:-false} 00:20:48.021 }, 00:20:48.021 "method": "bdev_nvme_attach_controller" 00:20:48.021 } 00:20:48.021 EOF 00:20:48.021 )") 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.021 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.021 { 00:20:48.021 "params": { 00:20:48.021 "name": "Nvme$subsystem", 00:20:48.021 "trtype": "$TEST_TRANSPORT", 00:20:48.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.021 "adrfam": "ipv4", 00:20:48.021 "trsvcid": "$NVMF_PORT", 00:20:48.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.022 "hdgst": ${hdgst:-false}, 00:20:48.022 "ddgst": ${ddgst:-false} 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 } 00:20:48.022 EOF 00:20:48.022 )") 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.022 { 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme$subsystem", 00:20:48.022 "trtype": "$TEST_TRANSPORT", 00:20:48.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "$NVMF_PORT", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.022 "hdgst": ${hdgst:-false}, 00:20:48.022 "ddgst": ${ddgst:-false} 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 } 00:20:48.022 EOF 00:20:48.022 )") 00:20:48.022 [2024-11-07 10:49:15.624951] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:20:48.022 [2024-11-07 10:49:15.625003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742811 ] 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.022 { 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme$subsystem", 00:20:48.022 "trtype": "$TEST_TRANSPORT", 00:20:48.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "$NVMF_PORT", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.022 "hdgst": ${hdgst:-false}, 00:20:48.022 "ddgst": ${ddgst:-false} 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 } 00:20:48.022 EOF 00:20:48.022 )") 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.022 { 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme$subsystem", 00:20:48.022 "trtype": "$TEST_TRANSPORT", 00:20:48.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "$NVMF_PORT", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.022 "hdgst": ${hdgst:-false}, 00:20:48.022 "ddgst": ${ddgst:-false} 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 } 00:20:48.022 EOF 00:20:48.022 )") 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:48.022 { 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme$subsystem", 00:20:48.022 "trtype": "$TEST_TRANSPORT", 00:20:48.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "$NVMF_PORT", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.022 "hdgst": ${hdgst:-false}, 00:20:48.022 "ddgst": ${ddgst:-false} 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 } 00:20:48.022 EOF 00:20:48.022 )") 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
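The repeated config+=("$(cat <<-EOF ... EOF)") fragments traced above, together with the IFS=, / printf / jq steps whose output follows below, are the body of gen_nvmf_target_json: one templated bdev_nvme_attach_controller entry per subsystem, joined with commas and validated by jq (the concrete tcp/10.0.0.2/4420 values appear in the printf output that follows). A condensed sketch of that pattern is given here; the outer "subsystems"/"bdev" wrapper is an assumption based on the JSON config format bdevperf consumes and is not itself visible in this excerpt.

```bash
# Condensed sketch of the pattern traced above (not the verbatim helper from
# test/nvmf/common.sh): template one attach entry per subsystem, join the
# fragments with commas, and let jq validate/pretty-print the result.
gen_target_json_sketch() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # Join the fragments with commas; the bdev-subsystem wrapper below is an
  # assumed detail, not shown in this log.
  local IFS=,
  jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
}

# Example (values as seen in the log output below):
#   TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420 \
#     gen_target_json_sketch 1 2 3
```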
00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:48.022 10:49:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme1", 00:20:48.022 "trtype": "tcp", 00:20:48.022 "traddr": "10.0.0.2", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "4420", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.022 "hdgst": false, 00:20:48.022 "ddgst": false 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 },{ 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme2", 00:20:48.022 "trtype": "tcp", 00:20:48.022 "traddr": "10.0.0.2", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "4420", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:48.022 "hdgst": false, 00:20:48.022 "ddgst": false 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 },{ 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme3", 00:20:48.022 "trtype": "tcp", 00:20:48.022 "traddr": "10.0.0.2", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "4420", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:48.022 "hdgst": false, 00:20:48.022 "ddgst": false 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 },{ 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme4", 00:20:48.022 "trtype": "tcp", 00:20:48.022 "traddr": "10.0.0.2", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "4420", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:48.022 "hdgst": false, 00:20:48.022 "ddgst": false 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 },{ 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme5", 00:20:48.022 "trtype": "tcp", 00:20:48.022 "traddr": "10.0.0.2", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "4420", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:48.022 "hdgst": false, 00:20:48.022 "ddgst": false 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 },{ 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme6", 00:20:48.022 "trtype": "tcp", 00:20:48.022 "traddr": "10.0.0.2", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "4420", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:48.022 "hdgst": false, 00:20:48.022 "ddgst": false 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 },{ 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme7", 00:20:48.022 "trtype": "tcp", 00:20:48.022 "traddr": "10.0.0.2", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "4420", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:48.022 "hdgst": false, 00:20:48.022 "ddgst": false 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 },{ 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme8", 00:20:48.022 "trtype": "tcp", 00:20:48.022 "traddr": "10.0.0.2", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "4420", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:48.022 "hdgst": false, 00:20:48.022 "ddgst": false 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 },{ 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme9", 00:20:48.022 "trtype": "tcp", 00:20:48.022 "traddr": "10.0.0.2", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "4420", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:48.022 "hdgst": false, 00:20:48.022 "ddgst": false 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 },{ 00:20:48.022 "params": { 00:20:48.022 "name": "Nvme10", 00:20:48.022 "trtype": "tcp", 00:20:48.022 "traddr": "10.0.0.2", 00:20:48.022 "adrfam": "ipv4", 00:20:48.022 "trsvcid": "4420", 00:20:48.022 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:48.022 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:48.022 "hdgst": false, 00:20:48.022 "ddgst": false 00:20:48.022 }, 00:20:48.022 "method": "bdev_nvme_attach_controller" 00:20:48.022 }' 00:20:48.280 [2024-11-07 10:49:15.690977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.280 [2024-11-07 10:49:15.733109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.178 Running I/O for 10 seconds... 00:20:50.178 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:50.178 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:20:50.178 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:50.178 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.178 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:50.436 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.436 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.436 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:50.436 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:50.436 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:50.436 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:50.436 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:50.436 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:50.436 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:50.436 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:50.436 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:50.436 10:49:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.436 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:50.436 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.437 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:50.437 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:50.437 10:49:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:50.695 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:50.695 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:50.695 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:50.695 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:50.695 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.695 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:50.695 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.695 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=88 00:20:50.695 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 88 -ge 100 ']' 00:20:50.695 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:50.953 10:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2742734 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2742734 ']' 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2742734 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:50.953 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2742734 00:20:51.228 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:51.228 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:51.228 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2742734' 00:20:51.228 killing process with pid 2742734 00:20:51.228 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 2742734 00:20:51.228 10:49:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 2742734 00:20:51.228 [2024-11-07 10:49:18.653578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653733] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the 
state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.653995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.654001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.654007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.654015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.654022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.654028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.654034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.654041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.654047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.654053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.654060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.654066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.654072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57070 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.655134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.655178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.655187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.655194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.655200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.655207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.655214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.655220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.655228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.655234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.655240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.655247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.228 [2024-11-07 10:49:18.655257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 
10:49:18.655263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same 
with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655547] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc9790 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.655835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.229 [2024-11-07 10:49:18.655867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.229 [2024-11-07 10:49:18.655877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.229 [2024-11-07 10:49:18.655889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.229 [2024-11-07 10:49:18.655897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.229 [2024-11-07 10:49:18.655904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.229 [2024-11-07 10:49:18.655912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.229 [2024-11-07 10:49:18.655919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.229 [2024-11-07 10:49:18.655926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d91c0 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657253] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.229 [2024-11-07 10:49:18.657374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.230 [2024-11-07 10:49:18.657380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.230 [2024-11-07 10:49:18.657388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set 00:20:51.230 [2024-11-07 10:49:18.657394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the 
state(6) to be set
00:20:51.230 [2024-11-07 10:49:18.657400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57560 is same with the state(6) to be set
(previous message repeated for tqpair=0x1e57560 through 10:49:18.657629)
00:20:51.230 [2024-11-07 10:49:18.658931] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:51.230 [2024-11-07 10:49:18.659159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57a30 is same with the state(6) to be set
(previous message repeated for tqpair=0x1e57a30 through 10:49:18.659588)
00:20:51.231 [2024-11-07 10:49:18.660662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57f20 is same with the state(6) to be set
(previous message repeated for tqpair=0x1e57f20 through 10:49:18.661089)
00:20:51.231 [2024-11-07 10:49:18.662169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58770 is same with the state(6) to be set
(previous message repeated for tqpair=0x1e58770 through 10:49:18.662588)
00:20:51.232 [2024-11-07 10:49:18.663639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e58c40 is same with the state(6) to be set
(previous message repeated for tqpair=0x1e58c40 through 10:49:18.671366)
00:20:51.233 [2024-11-07 10:49:18.672163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e59130 is same with the state(6) to be set
(previous message repeated for tqpair=0x1e59130 through 10:49:18.672594)
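The stretch of log collapsed above is one transport-layer diagnostic repeated for a handful of qpairs. As an orientation aid only, the following is a minimal, hypothetical C sketch of the kind of guard that produces a message of this shape: a request to set a qpair's receive state to the value it already holds is logged and ignored. The type, enum, and function names below are assumptions for illustration; this is not SPDK's actual tcp.c implementation, and the meaning of the numeric value behind "state(6)" in the real log is not confirmed here.

/*
 * Hypothetical sketch (NOT SPDK's code) of a recv-state setter that logs an
 * error when asked to re-apply the state the qpair is already in.
 */
#include <stdio.h>

enum recv_state {                    /* invented names and ordering */
    RECV_STATE_AWAIT_PDU_READY = 0,
    RECV_STATE_AWAIT_PDU_CH,
    RECV_STATE_AWAIT_PDU_PSH,
    RECV_STATE_AWAIT_PDU_PAYLOAD,
    RECV_STATE_QUIESCING,
    RECV_STATE_ERROR,
    RECV_STATE_EXITING               /* would print as state(6) with this ordering */
};

struct tqpair_stub {
    void *ptr;                       /* printed as the tqpair address in the log */
    enum recv_state recv_state;
};

static void set_recv_state(struct tqpair_stub *tq, enum recv_state new_state)
{
    if (tq->recv_state == new_state) {
        /* The collapsed log region above is dominated by exactly this branch. */
        fprintf(stderr,
                "*ERROR*: The recv state of tqpair=%p is same with the state(%d) to be set\n",
                tq->ptr, (int)new_state);
        return;
    }
    tq->recv_state = new_state;
}

int main(void)
{
    struct tqpair_stub tq = { .ptr = &tq, .recv_state = RECV_STATE_ERROR };

    set_recv_state(&tq, RECV_STATE_EXITING);   /* real transition, no message      */
    set_recv_state(&tq, RECV_STATE_EXITING);   /* repeated request, logs the error */
    return 0;
}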
00:20:51.234 [2024-11-07 10:49:18.676547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.234 [2024-11-07 10:49:18.676574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(similar WRITE command/completion pairs repeated for cid:8 through cid:43, lba 25600 through 30080 in steps of 128, each completed as ABORTED - SQ DELETION (00/08))
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.235 [2024-11-07 10:49:18.677652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.235 [2024-11-07 10:49:18.677680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:51.235 [2024-11-07 10:49:18.677789] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:51.235 [2024-11-07 10:49:18.677842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.677855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.677864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.677871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.677879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.677887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.677896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.677904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.677911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed610 is same with the state(6) to be set 00:20:51.236 [2024-11-07 10:49:18.677938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.677947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.677955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.677963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.677972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.677980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.677990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.677999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x292afb0 is same with the state(6) to be set 00:20:51.236 [2024-11-07 10:49:18.678032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678089] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2905110 is same with the state(6) to be set 00:20:51.236 [2024-11-07 10:49:18.678122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cfe90 is same with the state(6) to be set 00:20:51.236 [2024-11-07 10:49:18.678207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c70 is same with the state(6) to be set 00:20:51.236 [2024-11-07 10:49:18.678298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29036d0 is same with the state(6) to be set 00:20:51.236 [2024-11-07 10:49:18.678385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2929df0 is same with the state(6) to be set 00:20:51.236 [2024-11-07 10:49:18.678473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d91c0 (9): Bad file descriptor 00:20:51.236 [2024-11-07 10:49:18.678500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678517] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.236 [2024-11-07 10:49:18.678566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x294cbd0 is same with the state(6) to be set 00:20:51.236 [2024-11-07 10:49:18.678592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.236 [2024-11-07 10:49:18.678601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.678611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.237 [2024-11-07 10:49:18.678619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.678627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.237 [2024-11-07 10:49:18.678634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.678642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.237 [2024-11-07 10:49:18.678650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.678657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2952900 is same with the state(6) to be set 00:20:51.237 [2024-11-07 10:49:18.678871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.678888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.678901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.678910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.678918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.678925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.678935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.678943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.678952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.678958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.678971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.678979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.678988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.678995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.237 [2024-11-07 10:49:18.679413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.237 [2024-11-07 10:49:18.679422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:51.238 [2024-11-07 10:49:18.679429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 
[2024-11-07 10:49:18.679611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 
10:49:18.679774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 
10:49:18.679944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.238 [2024-11-07 10:49:18.679960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.238 [2024-11-07 10:49:18.679969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28dd140 is same with the state(6) to be set 00:20:51.238 [2024-11-07 10:49:18.682276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:20:51.238 [2024-11-07 10:49:18.682304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:51.238 [2024-11-07 10:49:18.682318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29036d0 (9): Bad file descriptor 00:20:51.238 [2024-11-07 10:49:18.682331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2952900 (9): Bad file descriptor 00:20:51.238 [2024-11-07 10:49:18.682391] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:51.238 [2024-11-07 10:49:18.683261] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:51.238 [2024-11-07 10:49:18.683321] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:51.238 [2024-11-07 10:49:18.683365] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:51.238 [2024-11-07 10:49:18.683517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.238 [2024-11-07 10:49:18.683534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2952900 with addr=10.0.0.2, port=4420 00:20:51.238 [2024-11-07 10:49:18.683543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2952900 is same with the state(6) to be set 00:20:51.238 [2024-11-07 10:49:18.683632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.238 [2024-11-07 10:49:18.683646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29036d0 with addr=10.0.0.2, port=4420 00:20:51.238 [2024-11-07 10:49:18.683654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29036d0 is same with the state(6) to be set 00:20:51.238 [2024-11-07 10:49:18.683706] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:51.238 [2024-11-07 10:49:18.683756] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:51.238 [2024-11-07 10:49:18.683821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2952900 (9): Bad file descriptor 00:20:51.238 [2024-11-07 10:49:18.683834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29036d0 (9): Bad file descriptor 00:20:51.239 [2024-11-07 10:49:18.683887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:51.239 [2024-11-07 10:49:18.683897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:51.239 [2024-11-07 
10:49:18.683907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:51.239 [2024-11-07 10:49:18.683917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:20:51.239 [2024-11-07 10:49:18.683925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:51.239 [2024-11-07 10:49:18.683932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:51.239 [2024-11-07 10:49:18.683940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:51.239 [2024-11-07 10:49:18.683947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:51.239 [2024-11-07 10:49:18.687835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ed610 (9): Bad file descriptor 00:20:51.239 [2024-11-07 10:49:18.687859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x292afb0 (9): Bad file descriptor 00:20:51.239 [2024-11-07 10:49:18.687877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2905110 (9): Bad file descriptor 00:20:51.239 [2024-11-07 10:49:18.687892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cfe90 (9): Bad file descriptor 00:20:51.239 [2024-11-07 10:49:18.687909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c70 (9): Bad file descriptor 00:20:51.239 [2024-11-07 10:49:18.687926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2929df0 (9): Bad file descriptor 00:20:51.239 [2024-11-07 10:49:18.687951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x294cbd0 (9): Bad file descriptor 00:20:51.239 [2024-11-07 10:49:18.688068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.239 [2024-11-07 10:49:18.688651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.239 [2024-11-07 10:49:18.688658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
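The command/completion pairs above repeat for every read that was still outstanding on the queue pair, which makes the raw console hard to scan. Below is a purely illustrative triage helper, not part of the test suite, that condenses a saved copy of this console log into abort and flush-failure counts; the file name is a placeholder.

# Illustrative only: count "ABORTED - SQ DELETION" completions and
# per-tqpair flush failures in a saved console log.
import re
from collections import Counter

ABORT_RE = re.compile(r"ABORTED - SQ DELETION")
QPAIR_RE = re.compile(r"tqpair=(0x[0-9a-f]+)")

def summarize(path: str) -> None:
    aborted = 0
    flush_failures = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            aborted += len(ABORT_RE.findall(line))
            if "Failed to flush tqpair" in line:
                flush_failures.update(QPAIR_RE.findall(line))
    print(f"aborted completions: {aborted}")
    for qpair, count in flush_failures.most_common():
        print(f"  flush failures on {qpair}: {count}")

if __name__ == "__main__":
    summarize("console.log")  # placeholder path, not a file from this job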
00:20:51.240 [2024-11-07 10:49:18.688692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 
10:49:18.688864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.688983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.688993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.689002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.689012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.689019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.689028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.689036] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.689045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.689053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.689061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.689076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.689085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.689093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.689104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.689112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.689121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.689129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.689139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.689147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.689156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.689164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.689175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.689182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.689192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.240 [2024-11-07 10:49:18.689200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.240 [2024-11-07 10:49:18.689209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26dd460 is same with the state(6) to be set 00:20:51.240 [2024-11-07 10:49:18.690223] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:51.240 [2024-11-07 10:49:18.690458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:51.240 [2024-11-07 10:49:18.690476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d91c0 with addr=10.0.0.2, port=4420
00:20:51.240 [2024-11-07 10:49:18.690486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d91c0 is same with the state(6) to be set
00:20:51.240 [2024-11-07 10:49:18.690752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d91c0 (9): Bad file descriptor
00:20:51.240 [2024-11-07 10:49:18.690806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:51.240 [2024-11-07 10:49:18.690817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:51.240 [2024-11-07 10:49:18.690826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:51.240 [2024-11-07 10:49:18.690836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:20:51.240 [2024-11-07 10:49:18.692978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:20:51.240 [2024-11-07 10:49:18.693028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:20:51.240 [2024-11-07 10:49:18.693193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:51.240 [2024-11-07 10:49:18.693208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29036d0 with addr=10.0.0.2, port=4420
00:20:51.240 [2024-11-07 10:49:18.693217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29036d0 is same with the state(6) to be set
00:20:51.240 [2024-11-07 10:49:18.693425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:51.240 [2024-11-07 10:49:18.693444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2952900 with addr=10.0.0.2, port=4420
00:20:51.240 [2024-11-07 10:49:18.693453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2952900 is same with the state(6) to be set
00:20:51.240 [2024-11-07 10:49:18.693463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29036d0 (9): Bad file descriptor
00:20:51.240 [2024-11-07 10:49:18.693499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2952900 (9): Bad file descriptor
00:20:51.241 [2024-11-07 10:49:18.693511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:20:51.241 [2024-11-07 10:49:18.693519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:20:51.241 [2024-11-07 10:49:18.693526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:20:51.241 [2024-11-07 10:49:18.693534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
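The connect() failures above report errno = 111, which on Linux is ECONNREFUSED: nothing was accepting connections on 10.0.0.2:4420 at that point, so each reconnect attempt fails immediately and bdev_nvme reports the controller reset as failed. A minimal sketch, assuming only the address and port taken from the log, of how the same refusal can be observed outside of SPDK:

# Illustrative only: attempt a TCP connect to the NVMe-oF listener address
# seen in the log and report errno the way posix_sock_create does
# (errno = 111 is ECONNREFUSED on Linux when no listener is present).
import errno
import socket

def try_connect(addr: str, port: int) -> None:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2.0)
    try:
        s.connect((addr, port))
        print(f"connected to {addr}:{port}")
    except OSError as e:
        name = errno.errorcode.get(e.errno, "?")
        print(f"connect() failed, errno = {e.errno} ({name})")
    finally:
        s.close()

if __name__ == "__main__":
    try_connect("10.0.0.2", 4420)  # address and port taken from the log above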
00:20:51.241 [2024-11-07 10:49:18.693568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:51.241 [2024-11-07 10:49:18.693575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:51.241 [2024-11-07 10:49:18.693582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:51.241 [2024-11-07 10:49:18.693590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:20:51.241 [2024-11-07 10:49:18.698013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.241 [2024-11-07 10:49:18.698565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.241 [2024-11-07 10:49:18.698572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
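The "(00/08)" suffix in the completions above is the (status code type / status code) pair: SCT 0x0 is the generic command status set and SC 0x08 is Command Aborted due to SQ Deletion, i.e. reads that were still queued when the submission queue was torn down during the reset. An illustrative decoder for that packed status word follows; the helper and the constant table are mine, and the bit layout matches the p/m/dnr fields the log prints.

# Illustrative only: decode the 16-bit completion status word as printed in
# the completions above, e.g. "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0".
GENERIC_STATUS = {0x00: "SUCCESS", 0x08: "ABORTED - SQ DELETION"}  # small subset

def decode_status(status16: int) -> str:
    p   = status16 & 0x1          # phase tag
    sc  = (status16 >> 1) & 0xFF  # status code
    sct = (status16 >> 9) & 0x7   # status code type (0 = generic)
    m   = (status16 >> 14) & 0x1  # more
    dnr = (status16 >> 15) & 0x1  # do not retry
    name = GENERIC_STATUS.get(sc, "?") if sct == 0 else "?"
    return f"{name} ({sct:02x}/{sc:02x}) p:{p} m:{m} dnr:{dnr}"

# The aborted reads above: SCT 0x0, SC 0x08, phase/more/dnr all zero.
print(decode_status(0x08 << 1))  # -> ABORTED - SQ DELETION (00/08) p:0 m:0 dnr:0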
00:20:51.242 [2024-11-07 10:49:18.698707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 
10:49:18.698874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.698989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.698996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.699006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.699013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.699022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.699031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.699040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.699047] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.699058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.699065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.699075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.699083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.699091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.699099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.699107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.699116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.699126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.699134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.699143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26de5a0 is same with the state(6) to be set 00:20:51.242 [2024-11-07 10:49:18.700161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.700178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.700189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.700197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.700207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.700216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.700226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.700233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.700243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.700250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.700259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.242 [2024-11-07 10:49:18.700268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.242 [2024-11-07 10:49:18.700277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.243 [2024-11-07 10:49:18.700956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.243 [2024-11-07 10:49:18.700964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.700972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.700979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.700988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.700995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.701270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.701278] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28aead0 is same with the state(6) to be set 00:20:51.244 [2024-11-07 10:49:18.702300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702476] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.244 [2024-11-07 10:49:18.702664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.244 [2024-11-07 10:49:18.702675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.702987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.702996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.245 [2024-11-07 10:49:18.703170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.245 [2024-11-07 10:49:18.703261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.245 [2024-11-07 10:49:18.703270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.703277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.703287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.703294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.703302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.703311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.703320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.703327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 
10:49:18.703339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.703346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.703355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.703362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.703371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.703379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.703387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.703394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.703403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28dbcc0 is same with the state(6) to be set 00:20:51.246 [2024-11-07 10:49:18.704423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.246 [2024-11-07 10:49:18.704974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.246 [2024-11-07 10:49:18.704984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.704991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.705587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.705595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28de680 is same with the state(6) to be set 00:20:51.247 [2024-11-07 10:49:18.706621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.706634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.706646] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.706654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.706663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.706671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.706681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.706689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.706700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.706707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.706717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.706725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.247 [2024-11-07 10:49:18.706738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.247 [2024-11-07 10:49:18.706745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.706986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.706995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:51.248 [2024-11-07 10:49:18.707355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.248 [2024-11-07 10:49:18.707423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.248 [2024-11-07 10:49:18.707431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 
10:49:18.707529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707708] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.707741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.707751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28dfc10 is same with the state(6) to be set 00:20:51.249 [2024-11-07 10:49:18.708763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.708776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.708788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.708796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.708806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.708814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.708823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.708830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.708840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.708850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.708859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.708868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.708878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.708885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.708896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.708903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.708912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.708921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.708930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.708940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.708949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.708956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.708966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.708973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.708983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.708990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.708999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.709008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.709016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.709025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.709034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.709041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.709051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.709058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.709069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.709076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.709085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.709094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.709103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.709111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.249 [2024-11-07 10:49:18.709121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.249 [2024-11-07 10:49:18.709128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.250 [2024-11-07 10:49:18.709739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.250 [2024-11-07 10:49:18.709748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.709756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.709765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.709774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.709782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.709789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.709797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.709805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.709815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.709822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.709832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.709839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.709847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.709854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.709866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.709874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.709884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x38275b0 is same with the state(6) to be set 00:20:51.251 [2024-11-07 10:49:18.710890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.710903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.710913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.710921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.710934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.710945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.710955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.710963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.710972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.710981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.710990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.710999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711125] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.251 [2024-11-07 10:49:18.711417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.251 [2024-11-07 10:49:18.711427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.252 [2024-11-07 10:49:18.711825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.711975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 
10:49:18.711992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.252 [2024-11-07 10:49:18.711999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.252 [2024-11-07 10:49:18.712008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x3ceb2e0 is same with the state(6) to be set 00:20:51.252 [2024-11-07 10:49:18.712992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:51.252 [2024-11-07 10:49:18.713010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:51.252 [2024-11-07 10:49:18.713024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:51.252 [2024-11-07 10:49:18.713035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:20:51.252 [2024-11-07 10:49:18.713112] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:20:51.252 [2024-11-07 10:49:18.713129] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:20:51.252 [2024-11-07 10:49:18.713139] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:20:51.252 [2024-11-07 10:49:18.713207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:51.252 [2024-11-07 10:49:18.713219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:51.252 task offset: 25472 on job bdev=Nvme9n1 fails 00:20:51.252 00:20:51.252 Latency(us) 00:20:51.252 [2024-11-07T09:49:18.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.252 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:51.252 Job: Nvme1n1 ended in about 0.91 seconds with error 00:20:51.252 Verification LBA range: start 0x0 length 0x400 00:20:51.253 Nvme1n1 : 0.91 210.59 13.16 70.20 0.00 225630.83 20515.62 199685.34 00:20:51.253 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:51.253 Job: Nvme2n1 ended in about 0.92 seconds with error 00:20:51.253 Verification LBA range: start 0x0 length 0x400 00:20:51.253 Nvme2n1 : 0.92 213.75 13.36 69.44 0.00 219866.68 19945.74 228863.11 00:20:51.253 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:51.253 Job: Nvme3n1 ended in about 0.92 seconds with error 00:20:51.253 Verification LBA range: start 0x0 length 0x400 00:20:51.253 Nvme3n1 : 0.92 207.84 12.99 69.28 0.00 220718.97 15272.74 222480.47 00:20:51.253 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:51.253 Job: Nvme4n1 ended in about 0.93 seconds with error 00:20:51.253 Verification LBA range: start 0x0 length 0x400 00:20:51.253 Nvme4n1 : 0.93 207.37 12.96 69.12 0.00 217270.09 26214.40 218833.25 00:20:51.253 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:51.253 Job: Nvme5n1 ended in about 0.90 seconds with error 00:20:51.253 Verification LBA range: start 0x0 length 0x400 00:20:51.253 Nvme5n1 : 0.90 212.45 13.28 
70.82 0.00 207736.43 5356.86 233422.14 00:20:51.253 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:51.253 Job: Nvme6n1 ended in about 0.93 seconds with error 00:20:51.253 Verification LBA range: start 0x0 length 0x400 00:20:51.253 Nvme6n1 : 0.93 206.88 12.93 68.96 0.00 209903.53 15614.66 242540.19 00:20:51.253 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:51.253 Job: Nvme7n1 ended in about 0.93 seconds with error 00:20:51.253 Verification LBA range: start 0x0 length 0x400 00:20:51.253 Nvme7n1 : 0.93 210.70 13.17 68.80 0.00 203309.99 21199.47 186008.26 00:20:51.253 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:51.253 Job: Nvme8n1 ended in about 0.93 seconds with error 00:20:51.253 Verification LBA range: start 0x0 length 0x400 00:20:51.253 Nvme8n1 : 0.93 205.93 12.87 68.64 0.00 202999.43 13164.19 227951.30 00:20:51.253 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:51.253 Job: Nvme9n1 ended in about 0.90 seconds with error 00:20:51.253 Verification LBA range: start 0x0 length 0x400 00:20:51.253 Nvme9n1 : 0.90 212.70 13.29 70.90 0.00 191652.56 3875.17 225215.89 00:20:51.253 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:51.253 Job: Nvme10n1 ended in about 0.93 seconds with error 00:20:51.253 Verification LBA range: start 0x0 length 0x400 00:20:51.253 Nvme10n1 : 0.93 136.98 8.56 68.49 0.00 260873.05 17780.20 249834.63 00:20:51.253 [2024-11-07T09:49:18.924Z] =================================================================================================================== 00:20:51.253 [2024-11-07T09:49:18.924Z] Total : 2025.20 126.57 694.65 0.00 214837.07 3875.17 249834.63 00:20:51.253 [2024-11-07 10:49:18.749363] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:51.253 [2024-11-07 10:49:18.749416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:51.253 [2024-11-07 10:49:18.749698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.253 [2024-11-07 10:49:18.749718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cfe90 with addr=10.0.0.2, port=4420 00:20:51.253 [2024-11-07 10:49:18.749729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cfe90 is same with the state(6) to be set 00:20:51.253 [2024-11-07 10:49:18.749909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.253 [2024-11-07 10:49:18.749921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d6c70 with addr=10.0.0.2, port=4420 00:20:51.253 [2024-11-07 10:49:18.749929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d6c70 is same with the state(6) to be set 00:20:51.253 [2024-11-07 10:49:18.750016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.253 [2024-11-07 10:49:18.750028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2905110 with addr=10.0.0.2, port=4420 00:20:51.253 [2024-11-07 10:49:18.750037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2905110 is same with the state(6) to be set 00:20:51.253 [2024-11-07 10:49:18.750183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.253 [2024-11-07 10:49:18.750194] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ed610 with addr=10.0.0.2, port=4420 00:20:51.253 [2024-11-07 10:49:18.750201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ed610 is same with the state(6) to be set 00:20:51.253 [2024-11-07 10:49:18.751822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:51.253 [2024-11-07 10:49:18.751843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:51.253 [2024-11-07 10:49:18.752000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.253 [2024-11-07 10:49:18.752016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x292afb0 with addr=10.0.0.2, port=4420 00:20:51.253 [2024-11-07 10:49:18.752025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x292afb0 is same with the state(6) to be set 00:20:51.253 [2024-11-07 10:49:18.752179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.253 [2024-11-07 10:49:18.752192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x294cbd0 with addr=10.0.0.2, port=4420 00:20:51.253 [2024-11-07 10:49:18.752199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x294cbd0 is same with the state(6) to be set 00:20:51.253 [2024-11-07 10:49:18.752350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.253 [2024-11-07 10:49:18.752362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2929df0 with addr=10.0.0.2, port=4420 00:20:51.253 [2024-11-07 10:49:18.752370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2929df0 is same with the state(6) to be set 00:20:51.253 [2024-11-07 10:49:18.752384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cfe90 (9): Bad file descriptor 00:20:51.253 [2024-11-07 10:49:18.752396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d6c70 (9): Bad file descriptor 00:20:51.253 [2024-11-07 10:49:18.752406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2905110 (9): Bad file descriptor 00:20:51.253 [2024-11-07 10:49:18.752421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ed610 (9): Bad file descriptor 00:20:51.253 [2024-11-07 10:49:18.752458] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:20:51.253 [2024-11-07 10:49:18.752474] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:20:51.253 [2024-11-07 10:49:18.752484] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:20:51.253 [2024-11-07 10:49:18.752496] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:20:51.253 [2024-11-07 10:49:18.752505] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
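The bdevperf summary a few lines above prints one fixed-width row per backing bdev (runtime, IOPS, MiB/s, Fail/s, TO/s and average/min/max latency in microseconds). Below is a minimal, illustrative sketch for pulling the per-device IOPS and average latency back out of a saved copy of that console output; the field offsets are an assumption taken from the header printed in this run, not a stable interface.

#!/usr/bin/env bash
# Illustrative only: extract "device  IOPS  avg latency (us)" from a saved copy
# of a bdevperf summary like the one above. Field offsets (name, ":", runtime,
# IOPS, MiB/s, Fail/s, TO/s, average, min, max) are assumed from this run.
summary_log=${1:-bdevperf-summary.log}   # hypothetical path to the captured log

awk '{
    for (i = 1; i < NF; i++)
        if ($i ~ /^Nvme[0-9]+n1$/ && $(i+1) == ":")
            printf "%-10s IOPS=%-8s avg_us=%s\n", $i, $(i+3), $(i+7)
}' "$summary_log"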
00:20:51.253 [2024-11-07 10:49:18.752569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:20:51.253 [2024-11-07 10:49:18.752759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.253 [2024-11-07 10:49:18.752773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d91c0 with addr=10.0.0.2, port=4420 00:20:51.253 [2024-11-07 10:49:18.752782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d91c0 is same with the state(6) to be set 00:20:51.253 [2024-11-07 10:49:18.752862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.253 [2024-11-07 10:49:18.752874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29036d0 with addr=10.0.0.2, port=4420 00:20:51.253 [2024-11-07 10:49:18.752882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29036d0 is same with the state(6) to be set 00:20:51.253 [2024-11-07 10:49:18.752892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x292afb0 (9): Bad file descriptor 00:20:51.253 [2024-11-07 10:49:18.752901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x294cbd0 (9): Bad file descriptor 00:20:51.253 [2024-11-07 10:49:18.752911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2929df0 (9): Bad file descriptor 00:20:51.253 [2024-11-07 10:49:18.752920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:51.253 [2024-11-07 10:49:18.752926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:51.253 [2024-11-07 10:49:18.752937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:51.253 [2024-11-07 10:49:18.752945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:51.253 [2024-11-07 10:49:18.752953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:51.253 [2024-11-07 10:49:18.752960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:51.253 [2024-11-07 10:49:18.752967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:51.253 [2024-11-07 10:49:18.752973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:51.253 [2024-11-07 10:49:18.752981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:51.253 [2024-11-07 10:49:18.752988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:51.253 [2024-11-07 10:49:18.752994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:51.253 [2024-11-07 10:49:18.753001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
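Every reconnect attempt above dies in posix_sock_create with errno 111, i.e. ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 any more because the target side has been shut down. A quick way to confirm that from the shell is sketched below (plain bash with /dev/tcp support; it only checks for a TCP listener, not for NVMe-oF itself).

#!/usr/bin/env bash
# Probe the situation logged above: is anything still listening on the
# NVMe/TCP port? Address and port are taken from the log; 111 == ECONNREFUSED.
target_ip=10.0.0.2
target_port=4420

if timeout 2 bash -c ">/dev/tcp/${target_ip}/${target_port}" 2>/dev/null; then
    echo "a listener still accepts connections on ${target_ip}:${target_port}"
else
    echo "connect failed - consistent with the errno 111 (ECONNREFUSED) errors above"
fi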
00:20:51.253 [2024-11-07 10:49:18.753013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:51.253 [2024-11-07 10:49:18.753019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:51.253 [2024-11-07 10:49:18.753026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:51.253 [2024-11-07 10:49:18.753032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:51.253 [2024-11-07 10:49:18.753282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.253 [2024-11-07 10:49:18.753295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2952900 with addr=10.0.0.2, port=4420 00:20:51.253 [2024-11-07 10:49:18.753302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2952900 is same with the state(6) to be set 00:20:51.253 [2024-11-07 10:49:18.753312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d91c0 (9): Bad file descriptor 00:20:51.254 [2024-11-07 10:49:18.753322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29036d0 (9): Bad file descriptor 00:20:51.254 [2024-11-07 10:49:18.753330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:51.254 [2024-11-07 10:49:18.753337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:51.254 [2024-11-07 10:49:18.753345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:51.254 [2024-11-07 10:49:18.753352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:51.254 [2024-11-07 10:49:18.753360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:51.254 [2024-11-07 10:49:18.753367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:51.254 [2024-11-07 10:49:18.753373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:51.254 [2024-11-07 10:49:18.753380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:20:51.254 [2024-11-07 10:49:18.753387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:51.254 [2024-11-07 10:49:18.753394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:51.254 [2024-11-07 10:49:18.753401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:51.254 [2024-11-07 10:49:18.753407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
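While these resets are being retried and failing, the host-side controller state can be inspected over SPDK's JSON-RPC with bdev_nvme_get_controllers. A rough polling sketch follows; SPDK_DIR matches the workspace used in this run, but the RPC socket path is a hypothetical placeholder for wherever the host application in this test exposes its socket, not a value taken from the log.

#!/usr/bin/env bash
# Watch host-side NVMe controller state while resets like the ones above are
# in flight. RPC_SOCK is an assumed socket path for the host application.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
RPC_SOCK=${RPC_SOCK:-/var/tmp/bdevperf.sock}

for _ in $(seq 1 10); do
    "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" bdev_nvme_get_controllers
    sleep 1
done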
00:20:51.254 [2024-11-07 10:49:18.753460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2952900 (9): Bad file descriptor 00:20:51.254 [2024-11-07 10:49:18.753471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:51.254 [2024-11-07 10:49:18.753478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:51.254 [2024-11-07 10:49:18.753485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:51.254 [2024-11-07 10:49:18.753492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:51.254 [2024-11-07 10:49:18.753500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:51.254 [2024-11-07 10:49:18.753506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:51.254 [2024-11-07 10:49:18.753517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:51.254 [2024-11-07 10:49:18.753524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:51.254 [2024-11-07 10:49:18.753547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:51.254 [2024-11-07 10:49:18.753555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:51.254 [2024-11-07 10:49:18.753562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:51.254 [2024-11-07 10:49:18.753569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
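At this point every path has given up ("Resetting controller failed." for all ten subsystems) and the host application exits non-zero, which is exactly what the test wants: the "NOT wait ..." / es handling in the trace that follows only passes because the wait fails. A minimal illustrative version of that expect-a-failure pattern (not SPDK's actual autotest_common.sh helper) looks like this:

#!/usr/bin/env bash
# Succeed only if the wrapped command fails, mirroring the exit-status
# remapping visible in the trace below (statuses above 128 are normalized).
expect_failure() {
    "$@"
    local es=$?
    (( es > 128 )) && es=127   # treat signal-style statuses uniformly
    (( es != 0 ))              # invert: non-zero from "$@" means this helper succeeds
}

expect_failure false && echo "command failed, as expected"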
00:20:51.513 10:49:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2742811 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2742811 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 2742811 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:52.449 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:52.449 rmmod nvme_tcp 00:20:52.449 
rmmod nvme_fabrics 00:20:52.708 rmmod nvme_keyring 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2742734 ']' 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2742734 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2742734 ']' 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2742734 00:20:52.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2742734) - No such process 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2742734 is not found' 00:20:52.708 Process with pid 2742734 is not found 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.708 10:49:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.611 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:54.611 00:20:54.611 real 0m7.816s 00:20:54.611 user 0m19.759s 00:20:54.611 sys 0m1.354s 00:20:54.611 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:54.611 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:54.611 ************************************ 00:20:54.611 END TEST nvmf_shutdown_tc3 00:20:54.611 ************************************ 00:20:54.611 10:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:54.611 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:54.611 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:54.611 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:54.611 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:54.611 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:54.870 ************************************ 00:20:54.870 START TEST nvmf_shutdown_tc4 00:20:54.870 ************************************ 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:54.870 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:54.870 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.870 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.871 10:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:54.871 Found net devices under 0000:86:00.0: cvl_0_0 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:54.871 Found net devices under 0000:86:00.1: cvl_0_1 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:54.871 10:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:54.871 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:55.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:20:55.131 00:20:55.131 --- 10.0.0.2 ping statistics --- 00:20:55.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.131 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:55.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:20:55.131 00:20:55.131 --- 10.0.0.1 ping statistics --- 00:20:55.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.131 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2744064 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2744064 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 2744064 ']' 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
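The nvmftestinit sequence above splits the two e810 ports between the default namespace (initiator, 10.0.0.1) and a private namespace (target, 10.0.0.2), opens TCP/4420 in the firewall, and ping-checks both directions before the target is started. A condensed sketch of that wiring, run as root, is below; the interface names are placeholders for whatever cvl_0_0/cvl_0_1 map to on the machine, and SPDK's common scripts may add steps not shown here.

#!/usr/bin/env bash
# Condensed sketch of the namespace wiring logged above (illustrative only).
set -e
NS=cvl_0_0_ns_spdk
TGT_IF=${TGT_IF:-cvl_0_0}   # port moved into the target namespace
INI_IF=${INI_IF:-cvl_0_1}   # port left in the default namespace (initiator)

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic in, then sanity-check reachability both ways.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1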
00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:55.131 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.131 [2024-11-07 10:49:22.680894] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:20:55.131 [2024-11-07 10:49:22.680945] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.131 [2024-11-07 10:49:22.749214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.131 [2024-11-07 10:49:22.791161] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.131 [2024-11-07 10:49:22.791203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.131 [2024-11-07 10:49:22.791211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.131 [2024-11-07 10:49:22.791217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.131 [2024-11-07 10:49:22.791222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.131 [2024-11-07 10:49:22.792844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.131 [2024-11-07 10:49:22.792907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.131 [2024-11-07 10:49:22.792997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.131 [2024-11-07 10:49:22.792998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.390 [2024-11-07 10:49:22.941733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:55.390 10:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.390 10:49:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.390 Malloc1 
00:20:55.390 [2024-11-07 10:49:23.052292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.648 Malloc2 00:20:55.648 Malloc3 00:20:55.648 Malloc4 00:20:55.648 Malloc5 00:20:55.648 Malloc6 00:20:55.648 Malloc7 00:20:55.907 Malloc8 00:20:55.907 Malloc9 00:20:55.907 Malloc10 00:20:55.907 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.907 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:55.907 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:55.907 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.907 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2744326 00:20:55.907 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:55.907 10:49:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:55.907 [2024-11-07 10:49:23.535778] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:01.177 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.177 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2744064 00:21:01.178 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2744064 ']' 00:21:01.178 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2744064 00:21:01.178 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:21:01.178 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:01.178 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2744064 00:21:01.178 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:01.178 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:01.178 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2744064' 00:21:01.178 killing process with pid 2744064 00:21:01.178 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 2744064 00:21:01.178 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 2744064 00:21:01.178 [2024-11-07 10:49:28.547478] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21efe50 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.547533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21efe50 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.547542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21efe50 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.547549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21efe50 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.547555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21efe50 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.547562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21efe50 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.547744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0320 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.547778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0320 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.547786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0320 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.547794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0320 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.547801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0320 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.547808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0320 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.547815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0320 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.547828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0320 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.548413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f07f0 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.548445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f07f0 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.548454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f07f0 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.548462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f07f0 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.548468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f07f0 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.548837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ef980 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.548862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ef980 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.548870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ef980 is same with the 
state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.548878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ef980 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.548884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ef980 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.548891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ef980 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.549750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1040 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.549772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1040 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.549779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1040 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.549785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1040 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.549792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1040 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.549799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1040 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.549806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1040 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.550266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1510 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.550289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1510 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.550297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1510 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.550304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1510 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.550311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1510 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.550317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1510 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.551290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0b70 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.551312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0b70 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.551325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0b70 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.551332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0b70 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.551339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0b70 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.551346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21f0b70 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.551353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0b70 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.551359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0b70 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.551366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0b70 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2380 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2380 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2380 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2380 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2380 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2380 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2380 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2850 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2850 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2850 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2850 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2850 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2850 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2850 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.555633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2850 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.556181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2d20 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.556204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2d20 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.556211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2d20 is same with the state(6) to be set 00:21:01.178 [2024-11-07 
10:49:28.556218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2d20 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.556225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2d20 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.556237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2d20 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.556244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2d20 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.556250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2d20 is same with the state(6) to be set 00:21:01.178 [2024-11-07 10:49:28.556257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f2d20 is same with the state(6) to be set 00:21:01.178 Write completed with error (sct=0, sc=8) 00:21:01.178 Write completed with error (sct=0, sc=8) 00:21:01.178 starting I/O failed: -6 00:21:01.178 Write completed with error (sct=0, sc=8) 00:21:01.178 Write completed with error (sct=0, sc=8) 00:21:01.178 Write completed with error (sct=0, sc=8) 00:21:01.178 Write completed with error (sct=0, sc=8) 00:21:01.178 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 [2024-11-07 10:49:28.562324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81a00 is same with the state(6) to be set 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 [2024-11-07 10:49:28.562348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81a00 is same with the state(6) to be set 00:21:01.179 [2024-11-07 10:49:28.562356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81a00 is same with the state(6) to be set 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 [2024-11-07 10:49:28.562362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81a00 is same with the state(6) to be set 00:21:01.179 starting I/O failed: -6 00:21:01.179 [2024-11-07 10:49:28.562371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81a00 is same with the state(6) to be set 00:21:01.179 [2024-11-07 10:49:28.562378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f81a00 is same with the state(6) to be set 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 [2024-11-07 10:49:28.562385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f81a00 is same with the state(6) to be set 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 [2024-11-07 10:49:28.562505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:01.179 starting I/O failed: -6 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 [2024-11-07 10:49:28.563336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f36c0 is same with the state(6) to be set 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 [2024-11-07 10:49:28.563358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f36c0 is same with the state(6) to be set 00:21:01.179 starting I/O failed: -6 00:21:01.179 [2024-11-07 10:49:28.563367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f36c0 is same with the state(6) to be set 00:21:01.179 [2024-11-07 10:49:28.563375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f36c0 is same with the state(6) to be set 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 [2024-11-07 10:49:28.563382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f36c0 is same with the state(6) to be 
set 00:21:01.179 [2024-11-07 10:49:28.563389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f36c0 is same with the state(6) to be set 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 [2024-11-07 10:49:28.563396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f36c0 is same with starting I/O failed: -6 00:21:01.179 the state(6) to be set 00:21:01.179 [2024-11-07 10:49:28.563403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f36c0 is same with the state(6) to be set 00:21:01.179 [2024-11-07 10:49:28.563409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f36c0 is same with the state(6) to be set 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 [2024-11-07 10:49:28.563536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:01.179 NVMe io qpair process completion error 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 
00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 [2024-11-07 10:49:28.564533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.179 starting I/O failed: -6 00:21:01.179 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 [2024-11-07 10:49:28.565317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20f32a0 is same with the state(6) to be set 00:21:01.180 starting I/O failed: -6 00:21:01.180 [2024-11-07 10:49:28.565332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f32a0 is same with the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.565340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f32a0 is same with Write completed with error (sct=0, sc=8) 00:21:01.180 the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.565349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f32a0 is same with the state(6) to be set 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 [2024-11-07 10:49:28.565357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f32a0 is same with the state(6) to be set 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 [2024-11-07 10:49:28.565397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 
00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 [2024-11-07 10:49:28.566441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:01.180 [2024-11-07 10:49:28.566448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.566460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.566467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.566474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.566480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.566487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.566494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.566500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.566506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.566516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 
[2024-11-07 10:49:28.566523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.566529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 [2024-11-07 10:49:28.566536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 starting I/O failed: -6 00:21:01.180 [2024-11-07 10:49:28.566544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7fca0 is same with the state(6) to be set 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 starting I/O failed: -6 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 [2024-11-07 10:49:28.566854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80170 is same with starting I/O failed: -6 00:21:01.180 the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.566876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80170 is same with the state(6) to be set 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.180 [2024-11-07 10:49:28.566884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80170 is same with the state(6) to be set 00:21:01.180 starting I/O failed: -6 00:21:01.180 [2024-11-07 10:49:28.566891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80170 is same with the state(6) to be set 00:21:01.180 [2024-11-07 10:49:28.566898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80170 is same with the state(6) to be set 00:21:01.180 Write completed with error (sct=0, sc=8) 00:21:01.181 [2024-11-07 10:49:28.566905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80170 is same with the state(6) to be set 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 
Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 [2024-11-07 10:49:28.567285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80660 is same with starting I/O failed: -6 00:21:01.181 the state(6) to be set 00:21:01.181 [2024-11-07 10:49:28.567304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80660 is same with the state(6) to be set 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 [2024-11-07 10:49:28.567311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80660 is same with the state(6) to be set 00:21:01.181 starting I/O failed: -6 00:21:01.181 [2024-11-07 10:49:28.567318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80660 is same with the state(6) to be set 00:21:01.181 [2024-11-07 10:49:28.567324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80660 is same with the state(6) to be set 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 [2024-11-07 10:49:28.567331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80660 is same with the state(6) to be set 00:21:01.181 starting I/O failed: -6 00:21:01.181 [2024-11-07 10:49:28.567338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80660 is same with the state(6) to be set 00:21:01.181 [2024-11-07 10:49:28.567345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80660 is same with the state(6) to be set 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 
starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 [2024-11-07 10:49:28.567616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7f7d0 is same with the state(6) to be set 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 [2024-11-07 10:49:28.567637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7f7d0 is same with the state(6) to be set 00:21:01.181 starting I/O failed: -6 00:21:01.181 [2024-11-07 10:49:28.567646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7f7d0 is same with the state(6) to be set 00:21:01.181 [2024-11-07 10:49:28.567653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7f7d0 is same with the state(6) to be set 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 [2024-11-07 10:49:28.567659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7f7d0 is same with the state(6) to be set 00:21:01.181 starting I/O failed: -6 00:21:01.181 [2024-11-07 10:49:28.567666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7f7d0 is same with the state(6) to be set 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 [2024-11-07 10:49:28.567893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:01.181 NVMe io qpair process completion error 00:21:01.181 [2024-11-07 10:49:28.568192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7edf0 is same with the state(6) to be set 00:21:01.181 [2024-11-07 10:49:28.568204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7edf0 is same with the state(6) to be set 00:21:01.181 [2024-11-07 10:49:28.568214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7edf0 is same with the state(6) to be set 00:21:01.181 [2024-11-07 10:49:28.568220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7edf0 is same with the state(6) to be set 00:21:01.181 [2024-11-07 10:49:28.568227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7edf0 is same with the state(6) to be set 00:21:01.181 [2024-11-07 10:49:28.568235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7edf0 is same with the state(6) to be set 00:21:01.181 [2024-11-07 10:49:28.568241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1f7edf0 is same with the state(6) to be set 00:21:01.181 [2024-11-07 10:49:28.568248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7edf0 is same with the state(6) to be set 00:21:01.181 [2024-11-07 10:49:28.568255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f7edf0 is same with the state(6) to be set 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 [2024-11-07 10:49:28.568926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 
00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.181 starting I/O failed: -6 00:21:01.181 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 [2024-11-07 10:49:28.569843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 
starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 [2024-11-07 10:49:28.570850] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.182 Write completed with error (sct=0, sc=8) 00:21:01.182 starting I/O failed: -6 00:21:01.183 Write completed with error (sct=0, sc=8) 00:21:01.183 starting I/O failed: -6 00:21:01.183 Write completed with error (sct=0, sc=8) 00:21:01.183 starting I/O failed: -6 00:21:01.183 Write completed with error (sct=0, sc=8) 00:21:01.183 starting I/O failed: -6 00:21:01.183 Write completed with error (sct=0, sc=8) 00:21:01.183 starting I/O failed: -6 00:21:01.183 Write completed with error (sct=0, sc=8) 00:21:01.183 starting I/O failed: -6 00:21:01.183 Write completed with error (sct=0, sc=8) 00:21:01.183 starting I/O failed: -6 00:21:01.183 Write completed with error (sct=0, sc=8) 00:21:01.183 starting I/O failed: -6 00:21:01.183 Write completed with error (sct=0, sc=8) 00:21:01.183 starting I/O failed: -6 00:21:01.183 Write completed with error (sct=0, sc=8) 00:21:01.183 starting I/O failed: -6 00:21:01.183 Write completed with error (sct=0, sc=8) 00:21:01.183 starting I/O failed: -6 00:21:01.183 Write completed with error (sct=0, sc=8) 00:21:01.183 starting I/O failed: -6 00:21:01.183 Write completed with error (sct=0, sc=8) 00:21:01.183 starting I/O failed: -6 00:21:01.183 Write completed with error (sct=0, sc=8) 00:21:01.183 starting I/O failed: -6 00:21:01.183 Write completed with error 
(sct=0, sc=8)
00:21:01.183 starting I/O failed: -6 / Write completed with error (sct=0, sc=8)  [this message pair repeated for the remaining in-flight writes on the qpair]
00:21:01.183 [2024-11-07 10:49:28.572685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:01.183 NVMe io qpair process completion error
00:21:01.183 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
00:21:01.184 [2024-11-07 10:49:28.575134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:01.184 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
00:21:01.184 [2024-11-07 10:49:28.577213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.184 NVMe io qpair process completion error
00:21:01.184 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
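The "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" messages appear to be application-level output from the test tool: the first would come from a write completion callback when the command status reports an error (sct=0 is the generic command status set, in which sc=8 is "Command Aborted due to SQ Deletion"), the second when resubmitting a write fails with -6 (-ENXIO) because the qpair's connection is already gone. A minimal sketch of such a submit path and callback, assuming an SPDK application with an already-attached controller, namespace and I/O qpair; the function and variable names are illustrative, not the test's actual source:

/* Illustrative sketch, not the test's source. Assumes SPDK headers plus an
 * already-attached controller, namespace and I/O qpair. */
#include <stdio.h>
#include <stdbool.h>
#include "spdk/nvme.h"

struct io_ctx {
	bool done;
	bool failed;
};

/* Invoked from spdk_nvme_qpair_process_completions() for each completed command. */
static void
write_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = cb_arg;

	ctx->done = true;
	if (spdk_nvme_cpl_is_error(cpl)) {
		ctx->failed = true;
		/* sct: status code type (0 = generic command status),
		 * sc:  status code within that type (8 = aborted, SQ deleted). */
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

static int
submit_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	     void *buf, uint64_t lba, uint32_t lba_count, struct io_ctx *ctx)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
					write_complete, ctx, 0 /* io_flags */);
	if (rc != 0) {
		/* Submission fails immediately (e.g. -6 == -ENXIO) once the
		 * qpair's transport connection has been torn down. */
		printf("starting I/O failed: %d\n", rc);
	}
	return rc;
}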
00:21:01.186 [2024-11-07 10:49:28.585423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:01.186 NVMe io qpair process completion error
00:21:01.186 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
00:21:01.186 [2024-11-07 10:49:28.586453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:01.186 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
00:21:01.186 [2024-11-07 10:49:28.587335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:01.186 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
00:21:01.186 [2024-11-07 10:49:28.588370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.186 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
00:21:01.187 [2024-11-07 10:49:28.591735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.187 NVMe io qpair process completion error
00:21:01.187 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
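The bracketed nvme_qpair.c errors, by contrast, are logged by the SPDK NVMe driver itself: once the TCP connection behind a qpair is gone, spdk_nvme_qpair_process_completions() reports the CQ transport error and returns a negative errno (-6, i.e. -ENXIO, "No such device or address") to the caller, which is presumably where the "NVMe io qpair process completion error" lines come from. A rough caller-side sketch under the same assumptions as the previous fragment:

/* Rough sketch (illustrative, same assumptions as above): polling a qpair and
 * reacting to a transport-level failure. */
#include <stdio.h>
#include "spdk/nvme.h"

static int32_t
drain_completions(struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0 means "process everything currently available". */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* The driver has already logged the CQ transport error and its
		 * errno; the caller only sees the negative return (-6 == -ENXIO
		 * here) and must stop submitting on this qpair until reconnect. */
		printf("NVMe io qpair process completion error\n");
	}
	return rc;
}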
00:21:01.187 [2024-11-07 10:49:28.593014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:01.187 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
00:21:01.188 [2024-11-07 10:49:28.593885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:01.188 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
00:21:01.188 [2024-11-07 10:49:28.594959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:01.188 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
00:21:01.189 [2024-11-07 10:49:28.596776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:01.189 NVMe io qpair process completion error
00:21:01.189 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
00:21:01.189 [2024-11-07 10:49:28.597770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:01.189 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
00:21:01.189 [2024-11-07 10:49:28.598656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:01.189 Write completed with error (sct=0, sc=8) / starting I/O failed: -6  [messages repeated for the remaining in-flight writes on the qpair]
00:21:01.189 Write completed with error
(sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.189 starting I/O failed: -6 00:21:01.189 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 [2024-11-07 10:49:28.599707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 
00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 
00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 [2024-11-07 10:49:28.605588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:01.190 NVMe io qpair process completion error 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 [2024-11-07 10:49:28.606600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport 
error -6 (No such device or address) on qpair id 1 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 starting I/O failed: -6 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.190 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 [2024-11-07 10:49:28.607500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:01.191 Write 
completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 
starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 [2024-11-07 10:49:28.608553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: 
-6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.191 Write completed with error (sct=0, sc=8) 00:21:01.191 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 [2024-11-07 10:49:28.613025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:01.192 NVMe io qpair process completion error 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write 
completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 [2024-11-07 10:49:28.614075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 
00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 [2024-11-07 10:49:28.615004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, 
sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.192 Write completed with error (sct=0, sc=8) 00:21:01.192 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 [2024-11-07 10:49:28.616062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 
starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 
starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 starting I/O failed: -6 00:21:01.193 [2024-11-07 10:49:28.618069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:01.193 NVMe io qpair process completion error 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 Write completed with error (sct=0, sc=8) 00:21:01.193 
00:21:01.194 Initializing NVMe Controllers
00:21:01.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:01.194 Controller IO queue size 128, less than required.
00:21:01.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:01.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:01.194 Controller IO queue size 128, less than required.
00:21:01.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:01.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:01.194 Controller IO queue size 128, less than required.
00:21:01.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:01.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:01.194 Controller IO queue size 128, less than required.
00:21:01.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:01.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:01.194 Controller IO queue size 128, less than required.
00:21:01.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:01.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:01.194 Controller IO queue size 128, less than required.
00:21:01.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:01.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:01.194 Controller IO queue size 128, less than required.
00:21:01.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:01.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:01.194 Controller IO queue size 128, less than required.
00:21:01.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:01.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:01.194 Controller IO queue size 128, less than required.
00:21:01.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:01.194 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:01.194 Controller IO queue size 128, less than required.
00:21:01.194 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:01.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:01.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:01.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:01.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:01.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:01.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:01.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:01.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:01.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:01.194 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:01.194 Initialization complete. Launching workers.
00:21:01.194 ========================================================
00:21:01.194 Latency(us)
00:21:01.194 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:21:01.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    2211.06      95.01   57895.66     913.76  110860.43
00:21:01.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    2133.87      91.69   60053.77     943.04  136569.04
00:21:01.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    2101.99      90.32   61013.47     746.95  125336.52
00:21:01.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    2088.14      89.72   61259.29     602.62  106080.87
00:21:01.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    2108.28      90.59   60106.26     914.59  105578.76
00:21:01.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    2127.37      91.41   59576.54     728.74  104664.85
00:21:01.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2167.43      93.13   58483.06     678.17  103869.37
00:21:01.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   2116.67      90.95   59894.21     861.78  102495.54
00:21:01.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    2207.49      94.85   57491.40     921.32  103410.37
00:21:01.194 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    2201.83      94.61   57672.88     883.63  101844.94
00:21:01.194 ========================================================
00:21:01.194 Total                                                                   :   21464.13     922.29   59318.47     602.62  136569.04
00:21:01.194
00:21:01.194 [2024-11-07 10:49:28.627618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621410 is same with the state(6) to be set
00:21:01.194 [2024-11-07 10:49:28.627676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621a70 is same with the state(6) to be set
00:21:01.194 [2024-11-07 10:49:28.627707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x621740 is same with the state(6) to be set
00:21:01.194 [2024-11-07 10:49:28.627736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620bc0 is same with the state(6) to be set
00:21:01.194 [2024-11-07 10:49:28.627769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620890 is same with the state(6) to be set
00:21:01.194 [2024-11-07 10:49:28.627801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x622ae0 is same with the state(6) to be set
00:21:01.194 [2024-11-07 10:49:28.627831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x622720 is same with the state(6) to be set
00:21:01.194 [2024-11-07 10:49:28.627861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x622900 is same with the state(6) to be set
00:21:01.194 [2024-11-07 10:49:28.627890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620560 is same with the state(6) to be set
00:21:01.194 [2024-11-07 10:49:28.627921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x620ef0 is same with the state(6) to be set
00:21:01.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:01.453 10:49:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:02.388 10:49:29
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2744326 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2744326 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2744326 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:02.388 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:02.389 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:02.389 10:49:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:02.389 rmmod nvme_tcp 00:21:02.389 rmmod nvme_fabrics 00:21:02.389 rmmod nvme_keyring 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2744064 ']' 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2744064 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2744064 ']' 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2744064 00:21:02.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2744064) - No such process 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2744064 is not found' 00:21:02.389 Process with pid 2744064 is not found 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.389 10:49:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.921 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:04.921 00:21:04.921 real 0m9.800s 00:21:04.921 user 0m24.878s 00:21:04.921 sys 0m5.291s 00:21:04.921 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:04.921 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:04.921 ************************************ 00:21:04.921 END TEST nvmf_shutdown_tc4 00:21:04.921 ************************************ 00:21:04.921 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:04.921 00:21:04.921 real 0m40.399s 00:21:04.921 user 1m39.790s 00:21:04.921 sys 0m13.878s 00:21:04.921 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:04.921 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:21:04.921 ************************************ 00:21:04.921 END TEST nvmf_shutdown 00:21:04.921 ************************************ 00:21:04.921 10:49:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:04.921 10:49:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:04.921 10:49:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:04.921 10:49:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.921 ************************************ 00:21:04.921 START TEST nvmf_nsid 00:21:04.921 ************************************ 00:21:04.921 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:04.921 * Looking for test storage... 00:21:04.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:04.921 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:04.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.922 --rc genhtml_branch_coverage=1 00:21:04.922 --rc genhtml_function_coverage=1 00:21:04.922 --rc genhtml_legend=1 00:21:04.922 --rc geninfo_all_blocks=1 00:21:04.922 --rc geninfo_unexecuted_blocks=1 00:21:04.922 00:21:04.922 ' 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:04.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.922 --rc genhtml_branch_coverage=1 00:21:04.922 --rc genhtml_function_coverage=1 00:21:04.922 --rc genhtml_legend=1 00:21:04.922 --rc geninfo_all_blocks=1 00:21:04.922 --rc geninfo_unexecuted_blocks=1 00:21:04.922 00:21:04.922 ' 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:04.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.922 --rc genhtml_branch_coverage=1 00:21:04.922 --rc genhtml_function_coverage=1 00:21:04.922 --rc genhtml_legend=1 00:21:04.922 --rc geninfo_all_blocks=1 00:21:04.922 --rc geninfo_unexecuted_blocks=1 00:21:04.922 00:21:04.922 ' 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:04.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.922 --rc genhtml_branch_coverage=1 00:21:04.922 --rc genhtml_function_coverage=1 00:21:04.922 --rc genhtml_legend=1 00:21:04.922 --rc geninfo_all_blocks=1 00:21:04.922 --rc geninfo_unexecuted_blocks=1 00:21:04.922 00:21:04.922 ' 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.922 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.923 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.923 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.923 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.923 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.923 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.923 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:04.923 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:04.923 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.923 10:49:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:10.191 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:10.191 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
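The trace above and immediately below shows nvmf/common.sh classifying the two Intel E810 ports (vendor 0x8086, device 0x159b, driver ice) and then mapping each PCI address to its kernel net device. A condensed sketch of that vendor/device matching, assuming the standard sysfs layout; it mirrors the pattern visible in the trace and is not the full gather_supported_nvmf_pci_devs() implementation.

# Sketch: locate E810-family ports (0x8086:0x159b) via sysfs and list the
# net device(s) behind each one, the way the trace does for cvl_0_0 / cvl_0_1.
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor")    # e.g. 0x8086
    device=$(cat "$dev/device")    # e.g. 0x159b
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        echo "Found ${dev##*/} ($vendor - $device)"
        # the net/ subdirectory holds the interface name(s) for this port
        for net in "$dev"/net/*; do
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    fi
done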
00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:10.191 Found net devices under 0000:86:00.0: cvl_0_0 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:10.191 Found net devices under 0000:86:00.1: cvl_0_1 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.191 10:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:10.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:21:10.191 00:21:10.191 --- 10.0.0.2 ping statistics --- 00:21:10.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.191 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:10.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:21:10.191 00:21:10.191 --- 10.0.0.1 ping statistics --- 00:21:10.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.191 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2748780 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2748780 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2748780 ']' 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:10.191 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:10.191 [2024-11-07 10:49:37.735152] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
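The addressing used for the rest of this run comes from the namespace plumbing shown above: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is launched under ip netns exec so it listens from inside the namespace. A condensed restatement of that setup, using the same interface names as the trace; it adds no commands beyond the ones already logged.

# Condensed restatement of the TCP test topology built above (physical NICs, no veth):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the initiator port
ping -c 1 10.0.0.2                                       # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target namespace -> root namespace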
00:21:10.191 [2024-11-07 10:49:37.735204] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.191 [2024-11-07 10:49:37.801565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.191 [2024-11-07 10:49:37.843381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.191 [2024-11-07 10:49:37.843416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.191 [2024-11-07 10:49:37.843424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.191 [2024-11-07 10:49:37.843430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.191 [2024-11-07 10:49:37.843439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.191 [2024-11-07 10:49:37.844008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2748805 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=16b6b761-bbfc-4f2b-969d-ba2ab2fbde1f 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=bcd2128b-b9bc-4bfc-bb1b-b98beee0d4ef 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e738a8d0-04a8-4081-8b03-bf5e5ca7925d 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.450 10:49:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:10.450 null0 00:21:10.450 null1 00:21:10.450 [2024-11-07 10:49:38.022910] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:21:10.450 [2024-11-07 10:49:38.022952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2748805 ] 00:21:10.450 null2 00:21:10.450 [2024-11-07 10:49:38.031796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.450 [2024-11-07 10:49:38.056001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.450 [2024-11-07 10:49:38.084832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.450 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.450 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2748805 /var/tmp/tgt2.sock 00:21:10.450 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2748805 ']' 00:21:10.450 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:10.450 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:10.450 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:10.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
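The second target started above (spdk_tgt -m 2 -r /var/tmp/tgt2.sock) is configured entirely through its private RPC socket before the initiator connects to nqn.2024-10.io.spdk:cnode2 at 10.0.0.1:4421. The sketch below shows the general shape of driving such a secondary target over a custom RPC socket; the NQN, address, port, and null bdev names appear in this log, but the specific RPC flags are assumptions from typical SPDK rpc.py usage, not a transcript of nsid.sh.

# Illustrative only -- configure the second SPDK target via its own RPC socket.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock"

$RPC nvmf_create_transport -t tcp
$RPC bdev_null_create null0 100 4096                     # name, size in MB, block size (assumed values)
$RPC nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a # allow any host
$RPC nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0
$RPC nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421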
00:21:10.450 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:10.450 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:10.708 [2024-11-07 10:49:38.131437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.708 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:10.708 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:21:10.708 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:11.275 [2024-11-07 10:49:38.658849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.275 [2024-11-07 10:49:38.674961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:11.275 nvme0n1 nvme0n2 00:21:11.275 nvme1n1 00:21:11.275 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:11.275 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:11.275 10:49:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:12.222 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:12.222 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:12.222 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:12.222 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:12.222 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:21:12.222 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:12.222 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:12.222 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:21:12.222 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:12.222 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:21:12.222 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:21:12.222 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:21:12.222 10:49:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:21:13.156 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:13.156 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:21:13.156 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:13.156 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:21:13.156 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:21:13.156 10:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 16b6b761-bbfc-4f2b-969d-ba2ab2fbde1f 00:21:13.156 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:13.156 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:13.156 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:13.156 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:13.156 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=16b6b761bbfc4f2b969dba2ab2fbde1f 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 16B6B761BBFC4F2B969DBA2AB2FBDE1F 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 16B6B761BBFC4F2B969DBA2AB2FBDE1F == \1\6\B\6\B\7\6\1\B\B\F\C\4\F\2\B\9\6\9\D\B\A\2\A\B\2\F\B\D\E\1\F ]] 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid bcd2128b-b9bc-4bfc-bb1b-b98beee0d4ef 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=bcd2128bb9bc4bfcbb1bb98beee0d4ef 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BCD2128BB9BC4BFCBB1BB98BEEE0D4EF 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ BCD2128BB9BC4BFCBB1BB98BEEE0D4EF == \B\C\D\2\1\2\8\B\B\9\B\C\4\B\F\C\B\B\1\B\B\9\8\B\E\E\E\0\D\4\E\F ]] 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:13.414 10:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e738a8d0-04a8-4081-8b03-bf5e5ca7925d 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e738a8d004a840818b03bf5e5ca7925d 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E738A8D004A840818B03BF5E5CA7925D 00:21:13.414 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E738A8D004A840818B03BF5E5CA7925D == \E\7\3\8\A\8\D\0\0\4\A\8\4\0\8\1\8\B\0\3\B\F\5\E\5\C\A\7\9\2\5\D ]] 00:21:13.415 10:49:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:13.673 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:13.673 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:13.673 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2748805 00:21:13.673 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2748805 ']' 00:21:13.673 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2748805 00:21:13.673 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:21:13.673 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:13.673 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2748805 00:21:13.673 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:13.673 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:13.673 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2748805' 00:21:13.673 killing process with pid 2748805 00:21:13.673 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2748805 00:21:13.673 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2748805 00:21:13.931 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:13.931 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:13.931 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:13.931 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:13.931 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:13.931 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 
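The NGUID checks above follow one simple pattern: strip the dashes from the UUID generated on the target side, read the NGUID the connected namespace reports over NVMe/TCP, and compare the two in upper case. A minimal sketch of that check using the same nvme-cli and jq calls visible in the trace; the device name is whatever nvme connect assigned (nvme0n1 in this run), and ns1uuid is the value from this log.

# Sketch of the NGUID verification performed above: the namespace UUID created on
# the target, with dashes removed, must match the NGUID the initiator sees.
expected_uuid="16b6b761-bbfc-4f2b-969d-ba2ab2fbde1f"      # ns1uuid from this run
expected_nguid=$(tr -d - <<< "$expected_uuid")            # -> 16b6b761bbfc4f2b969dba2ab2fbde1f
reported_nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)

if [[ ${reported_nguid^^} == "${expected_nguid^^}" ]]; then
    echo "NGUID matches for /dev/nvme0n1"
else
    echo "NGUID mismatch: expected $expected_nguid got $reported_nguid" >&2
fi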
00:21:13.931 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:13.931 rmmod nvme_tcp 00:21:13.931 rmmod nvme_fabrics 00:21:13.931 rmmod nvme_keyring 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2748780 ']' 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2748780 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2748780 ']' 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2748780 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2748780 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2748780' 00:21:14.190 killing process with pid 2748780 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2748780 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2748780 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.190 10:49:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.725 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.725 00:21:16.725 real 0m11.663s 00:21:16.725 user 0m9.318s 00:21:16.725 sys 0m5.071s 00:21:16.725 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:21:16.725 10:49:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:16.725 ************************************ 00:21:16.725 END TEST nvmf_nsid 00:21:16.725 ************************************ 00:21:16.725 10:49:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:16.725 00:21:16.725 real 11m43.057s 00:21:16.725 user 25m15.229s 00:21:16.725 sys 3m36.418s 00:21:16.725 10:49:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:16.725 10:49:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:16.725 ************************************ 00:21:16.725 END TEST nvmf_target_extra 00:21:16.725 ************************************ 00:21:16.725 10:49:43 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:16.725 10:49:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:16.725 10:49:43 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:16.725 10:49:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:16.725 ************************************ 00:21:16.725 START TEST nvmf_host 00:21:16.725 ************************************ 00:21:16.725 10:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:16.725 * Looking for test storage... 00:21:16.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:16.725 10:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:16.725 10:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:21:16.725 10:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:16.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.726 --rc genhtml_branch_coverage=1 00:21:16.726 --rc genhtml_function_coverage=1 00:21:16.726 --rc genhtml_legend=1 00:21:16.726 --rc geninfo_all_blocks=1 00:21:16.726 --rc geninfo_unexecuted_blocks=1 00:21:16.726 00:21:16.726 ' 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:16.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.726 --rc genhtml_branch_coverage=1 00:21:16.726 --rc genhtml_function_coverage=1 00:21:16.726 --rc genhtml_legend=1 00:21:16.726 --rc geninfo_all_blocks=1 00:21:16.726 --rc geninfo_unexecuted_blocks=1 00:21:16.726 00:21:16.726 ' 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:16.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.726 --rc genhtml_branch_coverage=1 00:21:16.726 --rc genhtml_function_coverage=1 00:21:16.726 --rc genhtml_legend=1 00:21:16.726 --rc geninfo_all_blocks=1 00:21:16.726 --rc geninfo_unexecuted_blocks=1 00:21:16.726 00:21:16.726 ' 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:16.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.726 --rc genhtml_branch_coverage=1 00:21:16.726 --rc genhtml_function_coverage=1 00:21:16.726 --rc genhtml_legend=1 00:21:16.726 --rc geninfo_all_blocks=1 00:21:16.726 --rc geninfo_unexecuted_blocks=1 00:21:16.726 00:21:16.726 ' 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
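The lcov gate traced a few lines up ('lt 1.15 2') is a plain field-by-field version comparison: both strings are split on '.', '-' and ':', the fields are compared numerically, and the first difference decides. A stripped-down sketch of that helper (the function name here is illustrative, and the real scripts/common.sh also normalizes non-numeric fields):

    version_lt() {    # succeeds when $1 sorts strictly before $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.x"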
00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.726 ************************************ 00:21:16.726 START TEST nvmf_multicontroller 00:21:16.726 ************************************ 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:16.726 * Looking for test storage... 
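Most of what sourcing test/nvmf/common.sh does, as walked through above, is pin down the host identity that every later connect will present: a host NQN is generated once with nvme gen-hostnqn, its UUID suffix doubles as the host ID, and the two are packed into the NVME_HOST argument array. Roughly (a sketch, not the full file; the exact way the real script derives NVME_HOSTID may differ):

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # keep only the trailing uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'
    # later tests expand these as, e.g.:
    #   $NVME_CONNECT -t tcp -a 10.0.0.2 -s $NVMF_PORT -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"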
00:21:16.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:21:16.726 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:16.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.986 --rc genhtml_branch_coverage=1 00:21:16.986 --rc genhtml_function_coverage=1 00:21:16.986 --rc genhtml_legend=1 00:21:16.986 --rc geninfo_all_blocks=1 00:21:16.986 --rc geninfo_unexecuted_blocks=1 00:21:16.986 00:21:16.986 ' 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:16.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.986 --rc genhtml_branch_coverage=1 00:21:16.986 --rc genhtml_function_coverage=1 00:21:16.986 --rc genhtml_legend=1 00:21:16.986 --rc geninfo_all_blocks=1 00:21:16.986 --rc geninfo_unexecuted_blocks=1 00:21:16.986 00:21:16.986 ' 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:16.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.986 --rc genhtml_branch_coverage=1 00:21:16.986 --rc genhtml_function_coverage=1 00:21:16.986 --rc genhtml_legend=1 00:21:16.986 --rc geninfo_all_blocks=1 00:21:16.986 --rc geninfo_unexecuted_blocks=1 00:21:16.986 00:21:16.986 ' 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:16.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.986 --rc genhtml_branch_coverage=1 00:21:16.986 --rc genhtml_function_coverage=1 00:21:16.986 --rc genhtml_legend=1 00:21:16.986 --rc geninfo_all_blocks=1 00:21:16.986 --rc geninfo_unexecuted_blocks=1 00:21:16.986 00:21:16.986 ' 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:16.986 10:49:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.986 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:16.987 10:49:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:16.987 10:49:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:22.256 
10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:22.256 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:22.257 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:22.257 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.257 10:49:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:22.257 Found net devices under 0000:86:00.0: cvl_0_0 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:22.257 Found net devices under 0000:86:00.1: cvl_0_1 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
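The device discovery that just ran is worth spelling out: gather_supported_nvmf_pci_devs keys known Intel E810/X722 and Mellanox PCI IDs into a bus cache, then for each matching PCI address it lists the kernel net devices exposed under that device in sysfs and keeps the ones whose state reads 'up'. Reduced to a sketch (the 0000:86:00.x addresses and cvl_0_* names are simply what this machine reports; reading operstate is an assumption about where the 'up' string comes from):

    for pci in 0000:86:00.0 0000:86:00.1; do            # both ports matched 0x8086:0x159b (ice/E810)
        for path in /sys/bus/pci/devices/$pci/net/*; do
            dev=${path##*/}
            [[ $(cat "$path/operstate") == up ]] && echo "Found net devices under $pci: $dev"
        done
    done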
00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:22.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:21:22.257 00:21:22.257 --- 10.0.0.2 ping statistics --- 00:21:22.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.257 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:22.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:21:22.257 00:21:22.257 --- 10.0.0.1 ping statistics --- 00:21:22.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.257 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:22.257 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:22.518 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:22.518 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:22.518 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:22.518 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.518 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2753011 00:21:22.518 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:22.518 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2753011 00:21:22.518 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2753011 ']' 00:21:22.518 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.518 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:22.518 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.518 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:22.518 10:49:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.518 [2024-11-07 10:49:49.993497] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
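The nvmf_tcp_init sequence above is the usual two-port split for these physical-NIC runs: one E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables exception opens port 4420, and two pings prove the path before nvmf_tgt is started inside that namespace. Condensed, with the names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target ns -> root ns
    modprobe nvme-tcp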
00:21:22.518 [2024-11-07 10:49:49.993545] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.518 [2024-11-07 10:49:50.062888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:22.518 [2024-11-07 10:49:50.111233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.518 [2024-11-07 10:49:50.111270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.518 [2024-11-07 10:49:50.111277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.518 [2024-11-07 10:49:50.111283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.518 [2024-11-07 10:49:50.111289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.518 [2024-11-07 10:49:50.112652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.518 [2024-11-07 10:49:50.112673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:22.518 [2024-11-07 10:49:50.112675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.777 [2024-11-07 10:49:50.252066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.777 Malloc0 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.777 [2024-11-07 10:49:50.311159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.777 [2024-11-07 10:49:50.319075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.777 Malloc1 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2753136 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2753136 /var/tmp/bdevperf.sock 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2753136 ']' 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
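The rpc_cmd calls traced above build the target side for the multicontroller test: a TCP transport, two 64 MiB malloc namespaces in two subsystems (cnode1, cnode2), each listening on both 4420 and 4421, and then a standalone bdevperf with its own RPC socket so controllers can be attached to it by hand. Spelled out with the plain rpc.py client, which rpc_cmd wraps (paths assume a default SPDK checkout):

    rpc="scripts/rpc.py"                                     # talks to the nvmf_tgt started above
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ...and the same again for Malloc1 / nqn.2016-06.io.spdk:cnode2 (serial SPDK00000000000002)
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &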
00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:22.777 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.036 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:23.036 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:21:23.036 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:23.036 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.036 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.294 NVMe0n1 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.294 1 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.294 request: 00:21:23.294 { 00:21:23.294 "name": "NVMe0", 00:21:23.294 "trtype": "tcp", 00:21:23.294 "traddr": "10.0.0.2", 00:21:23.294 "adrfam": "ipv4", 00:21:23.294 "trsvcid": "4420", 00:21:23.294 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:23.294 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:23.294 "hostaddr": "10.0.0.1", 00:21:23.294 "prchk_reftag": false, 00:21:23.294 "prchk_guard": false, 00:21:23.294 "hdgst": false, 00:21:23.294 "ddgst": false, 00:21:23.294 "allow_unrecognized_csi": false, 00:21:23.294 "method": "bdev_nvme_attach_controller", 00:21:23.294 "req_id": 1 00:21:23.294 } 00:21:23.294 Got JSON-RPC error response 00:21:23.294 response: 00:21:23.294 { 00:21:23.294 "code": -114, 00:21:23.294 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:23.294 } 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:23.294 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.295 request: 00:21:23.295 { 00:21:23.295 "name": "NVMe0", 00:21:23.295 "trtype": "tcp", 00:21:23.295 "traddr": "10.0.0.2", 00:21:23.295 "adrfam": "ipv4", 00:21:23.295 "trsvcid": "4420", 00:21:23.295 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:23.295 "hostaddr": "10.0.0.1", 00:21:23.295 "prchk_reftag": false, 00:21:23.295 "prchk_guard": false, 00:21:23.295 "hdgst": false, 00:21:23.295 "ddgst": false, 00:21:23.295 "allow_unrecognized_csi": false, 00:21:23.295 "method": "bdev_nvme_attach_controller", 00:21:23.295 "req_id": 1 00:21:23.295 } 00:21:23.295 Got JSON-RPC error response 00:21:23.295 response: 00:21:23.295 { 00:21:23.295 "code": -114, 00:21:23.295 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:23.295 } 00:21:23.295 10:49:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.295 request: 00:21:23.295 { 00:21:23.295 "name": "NVMe0", 00:21:23.295 "trtype": "tcp", 00:21:23.295 "traddr": "10.0.0.2", 00:21:23.295 "adrfam": "ipv4", 00:21:23.295 "trsvcid": "4420", 00:21:23.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.295 "hostaddr": "10.0.0.1", 00:21:23.295 "prchk_reftag": false, 00:21:23.295 "prchk_guard": false, 00:21:23.295 "hdgst": false, 00:21:23.295 "ddgst": false, 00:21:23.295 "multipath": "disable", 00:21:23.295 "allow_unrecognized_csi": false, 00:21:23.295 "method": "bdev_nvme_attach_controller", 00:21:23.295 "req_id": 1 00:21:23.295 } 00:21:23.295 Got JSON-RPC error response 00:21:23.295 response: 00:21:23.295 { 00:21:23.295 "code": -114, 00:21:23.295 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:23.295 } 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.295 10:49:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.295 request: 00:21:23.295 { 00:21:23.295 "name": "NVMe0", 00:21:23.295 "trtype": "tcp", 00:21:23.295 "traddr": "10.0.0.2", 00:21:23.295 "adrfam": "ipv4", 00:21:23.295 "trsvcid": "4420", 00:21:23.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.295 "hostaddr": "10.0.0.1", 00:21:23.295 "prchk_reftag": false, 00:21:23.295 "prchk_guard": false, 00:21:23.295 "hdgst": false, 00:21:23.295 "ddgst": false, 00:21:23.295 "multipath": "failover", 00:21:23.295 "allow_unrecognized_csi": false, 00:21:23.295 "method": "bdev_nvme_attach_controller", 00:21:23.295 "req_id": 1 00:21:23.295 } 00:21:23.295 Got JSON-RPC error response 00:21:23.295 response: 00:21:23.295 { 00:21:23.295 "code": -114, 00:21:23.295 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:23.295 } 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.295 10:49:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.553 NVMe0n1 00:21:23.553 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
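The four NOT rpc_cmd checks above all exercise the same bdev_nvme_attach_controller guard: once a controller named NVMe0 exists, a second attach must either match its network path exactly or be rejected with JSON-RPC error -114, while a second listener port on the same subsystem is accepted as an extra path. Below is a minimal hand-driven sketch of that sequence through SPDK's rpc.py; the rpc.py location is an assumption, every address, NQN and flag is copied verbatim from the trace, and the comments mark which calls are expected to fail.
RPC='scripts/rpc.py -s /var/tmp/bdevperf.sock'     # rpc.py path is an assumption; socket is the one from the trace
# First attach succeeds and exposes the namespace as NVMe0n1.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
# Reusing the name with a different hostnqn is rejected with code -114.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 || echo 'rejected as expected'
# Same rejection for a different subsystem NQN, and for -x disable / -x failover on the same path.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || echo 'rejected as expected'
# A second listener port on the same subsystem is accepted as an additional path.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1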
00:21:23.553 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:23.553 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.553 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.553 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.553 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:23.553 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.553 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.812 00:21:23.812 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.812 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:23.812 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:23.812 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.812 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:23.812 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.812 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:23.812 10:49:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:25.186 { 00:21:25.186 "results": [ 00:21:25.186 { 00:21:25.186 "job": "NVMe0n1", 00:21:25.186 "core_mask": "0x1", 00:21:25.186 "workload": "write", 00:21:25.186 "status": "finished", 00:21:25.186 "queue_depth": 128, 00:21:25.186 "io_size": 4096, 00:21:25.186 "runtime": 1.008102, 00:21:25.186 "iops": 24064.03320298938, 00:21:25.186 "mibps": 94.00012969917726, 00:21:25.186 "io_failed": 0, 00:21:25.186 "io_timeout": 0, 00:21:25.186 "avg_latency_us": 5311.544234412329, 00:21:25.186 "min_latency_us": 5043.422608695652, 00:21:25.186 "max_latency_us": 12081.419130434782 00:21:25.186 } 00:21:25.186 ], 00:21:25.186 "core_count": 1 00:21:25.186 } 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2753136 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 2753136 ']' 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2753136 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2753136 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2753136' 00:21:25.186 killing process with pid 2753136 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2753136 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2753136 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:21:25.186 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:21:25.186 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:25.186 [2024-11-07 10:49:50.424330] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:21:25.186 [2024-11-07 10:49:50.424384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2753136 ] 00:21:25.186 [2024-11-07 10:49:50.487521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.186 [2024-11-07 10:49:50.530538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.186 [2024-11-07 10:49:51.305924] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 069dce79-0953-4247-9fe2-2f334ab72507 already exists 00:21:25.186 [2024-11-07 10:49:51.305956] bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:069dce79-0953-4247-9fe2-2f334ab72507 alias for bdev NVMe1n1 00:21:25.186 [2024-11-07 10:49:51.305965] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:25.186 Running I/O for 1 seconds... 00:21:25.186 24004.00 IOPS, 93.77 MiB/s 00:21:25.186 Latency(us) 00:21:25.186 [2024-11-07T09:49:52.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.186 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:25.186 NVMe0n1 : 1.01 24064.03 94.00 0.00 0.00 5311.54 5043.42 12081.42 00:21:25.186 [2024-11-07T09:49:52.857Z] =================================================================================================================== 00:21:25.186 [2024-11-07T09:49:52.858Z] Total : 24064.03 94.00 0.00 0.00 5311.54 5043.42 12081.42 00:21:25.187 Received shutdown signal, test time was about 1.000000 seconds 00:21:25.187 00:21:25.187 Latency(us) 00:21:25.187 [2024-11-07T09:49:52.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.187 [2024-11-07T09:49:52.858Z] =================================================================================================================== 00:21:25.187 [2024-11-07T09:49:52.858Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:25.187 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:25.187 rmmod nvme_tcp 00:21:25.187 rmmod nvme_fabrics 00:21:25.187 rmmod nvme_keyring 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:25.187 
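The perform_tests result block and the bdevperf summary dumped from try.txt describe the same run two ways: 24064.03 IOPS of 4 KiB writes for 94.00 MiB/s. The one-liner below is a cross-check added here, not part of the test; it only confirms that the throughput figure follows from the IOPS and I/O size reported in the JSON.
awk 'BEGIN {
    iops    = 24064.03320298938      # "iops" from the results JSON above
    io_size = 4096                   # "io_size" from the same block
    printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
}'
# prints 94.00, matching both "mibps" in the JSON and the 94.00 MiB/s column in try.txt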
10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2753011 ']' 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2753011 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 2753011 ']' 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2753011 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2753011 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2753011' 00:21:25.187 killing process with pid 2753011 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2753011 00:21:25.187 10:49:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2753011 00:21:25.446 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:25.446 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.446 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.446 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:25.446 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:25.446 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:25.446 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.446 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.446 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.446 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.446 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.446 10:49:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.978 10:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:27.978 00:21:27.978 real 0m10.882s 00:21:27.978 user 0m12.855s 00:21:27.978 sys 0m4.826s 00:21:27.978 10:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:27.978 10:49:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.978 ************************************ 00:21:27.978 END TEST nvmf_multicontroller 00:21:27.978 ************************************ 00:21:27.978 10:49:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:21:27.978 10:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:27.978 10:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:27.978 10:49:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.978 ************************************ 00:21:27.978 START TEST nvmf_aer 00:21:27.978 ************************************ 00:21:27.978 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:27.978 * Looking for test storage... 00:21:27.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:27.978 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:27.978 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:21:27.978 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:27.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.979 --rc genhtml_branch_coverage=1 00:21:27.979 --rc genhtml_function_coverage=1 00:21:27.979 --rc genhtml_legend=1 00:21:27.979 --rc geninfo_all_blocks=1 00:21:27.979 --rc geninfo_unexecuted_blocks=1 00:21:27.979 00:21:27.979 ' 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:27.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.979 --rc genhtml_branch_coverage=1 00:21:27.979 --rc genhtml_function_coverage=1 00:21:27.979 --rc genhtml_legend=1 00:21:27.979 --rc geninfo_all_blocks=1 00:21:27.979 --rc geninfo_unexecuted_blocks=1 00:21:27.979 00:21:27.979 ' 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:27.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.979 --rc genhtml_branch_coverage=1 00:21:27.979 --rc genhtml_function_coverage=1 00:21:27.979 --rc genhtml_legend=1 00:21:27.979 --rc geninfo_all_blocks=1 00:21:27.979 --rc geninfo_unexecuted_blocks=1 00:21:27.979 00:21:27.979 ' 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:27.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.979 --rc genhtml_branch_coverage=1 00:21:27.979 --rc genhtml_function_coverage=1 00:21:27.979 --rc genhtml_legend=1 00:21:27.979 --rc geninfo_all_blocks=1 00:21:27.979 --rc geninfo_unexecuted_blocks=1 00:21:27.979 00:21:27.979 ' 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:27.979 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.979 10:49:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:33.317 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:33.317 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:33.317 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:33.318 Found net devices under 0000:86:00.0: cvl_0_0 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:33.318 10:50:00 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:33.318 Found net devices under 0000:86:00.1: cvl_0_1 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:33.318 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:33.624 10:50:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:33.624 
10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:33.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:33.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:21:33.624 00:21:33.624 --- 10.0.0.2 ping statistics --- 00:21:33.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.624 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:33.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:21:33.624 00:21:33.624 --- 10.0.0.1 ping statistics --- 00:21:33.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.624 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2756951 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2756951 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 2756951 ']' 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:33.624 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:33.624 [2024-11-07 10:50:01.107931] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
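nvmfappstart above launches the target inside the cvl_0_0_ns_spdk namespace with the exact command line shown, then waits for its RPC socket before any rpc_cmd is issued. A hand-rolled equivalent might look like this sketch; the relative binary path and the rpc_get_methods readiness poll are assumptions (the test's own waitforlisten helper does this internally), while the namespace name and app flags come straight from the trace.
# Start nvmf_tgt in the target namespace; its unix RPC socket is still visible
# from the host filesystem, so rpc.py needs no netns wrapper.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll until the app answers on its default socket (rpc_get_methods probe is an assumption).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is ready for RPCs"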
00:21:33.624 [2024-11-07 10:50:01.107978] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.624 [2024-11-07 10:50:01.172766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.624 [2024-11-07 10:50:01.216833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.624 [2024-11-07 10:50:01.216870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.624 [2024-11-07 10:50:01.216876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.624 [2024-11-07 10:50:01.216883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.624 [2024-11-07 10:50:01.216888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:33.624 [2024-11-07 10:50:01.218358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.625 [2024-11-07 10:50:01.218375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.625 [2024-11-07 10:50:01.218474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.625 [2024-11-07 10:50:01.218479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:33.893 [2024-11-07 10:50:01.364550] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:33.893 Malloc0 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:33.893 [2024-11-07 10:50:01.426037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.893 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:33.893 [ 00:21:33.893 { 00:21:33.893 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:33.893 "subtype": "Discovery", 00:21:33.893 "listen_addresses": [], 00:21:33.893 "allow_any_host": true, 00:21:33.894 "hosts": [] 00:21:33.894 }, 00:21:33.894 { 00:21:33.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.894 "subtype": "NVMe", 00:21:33.894 "listen_addresses": [ 00:21:33.894 { 00:21:33.894 "trtype": "TCP", 00:21:33.894 "adrfam": "IPv4", 00:21:33.894 "traddr": "10.0.0.2", 00:21:33.894 "trsvcid": "4420" 00:21:33.894 } 00:21:33.894 ], 00:21:33.894 "allow_any_host": true, 00:21:33.894 "hosts": [], 00:21:33.894 "serial_number": "SPDK00000000000001", 00:21:33.894 "model_number": "SPDK bdev Controller", 00:21:33.894 "max_namespaces": 2, 00:21:33.894 "min_cntlid": 1, 00:21:33.894 "max_cntlid": 65519, 00:21:33.894 "namespaces": [ 00:21:33.894 { 00:21:33.894 "nsid": 1, 00:21:33.894 "bdev_name": "Malloc0", 00:21:33.894 "name": "Malloc0", 00:21:33.894 "nguid": "818EDA8A4EB64CE79FFD7FB60FE217DB", 00:21:33.894 "uuid": "818eda8a-4eb6-4ce7-9ffd-7fb60fe217db" 00:21:33.894 } 00:21:33.894 ] 00:21:33.894 } 00:21:33.894 ] 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2757156 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:21:33.894 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.152 Malloc1 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.152 Asynchronous Event Request test 00:21:34.152 Attaching to 10.0.0.2 00:21:34.152 Attached to 10.0.0.2 00:21:34.152 Registering asynchronous event callbacks... 00:21:34.152 Starting namespace attribute notice tests for all controllers... 00:21:34.152 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:34.152 aer_cb - Changed Namespace 00:21:34.152 Cleaning up... 
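The "aer_cb - Changed Namespace" output above is the point of the test: the aer helper registers for asynchronous events, and adding a second namespace to cnode1 fires the Namespace Attribute Changed notice it then reports. Condensed into a stand-alone sketch using only commands already present in the trace, with the long workspace paths shortened (the only assumption here):
# Subscribe for AENs against the subsystem created earlier; -t names the touch
# file the helper creates once it is connected and listening.
./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # same loop as the waitforfile helper
# Adding a second namespace is what triggers the "Changed Namespace" callback.
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait $aerpid                                             # aer exits after the event is observed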
00:21:34.152 [ 00:21:34.152 { 00:21:34.152 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:34.152 "subtype": "Discovery", 00:21:34.152 "listen_addresses": [], 00:21:34.152 "allow_any_host": true, 00:21:34.152 "hosts": [] 00:21:34.152 }, 00:21:34.152 { 00:21:34.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.152 "subtype": "NVMe", 00:21:34.152 "listen_addresses": [ 00:21:34.152 { 00:21:34.152 "trtype": "TCP", 00:21:34.152 "adrfam": "IPv4", 00:21:34.152 "traddr": "10.0.0.2", 00:21:34.152 "trsvcid": "4420" 00:21:34.152 } 00:21:34.152 ], 00:21:34.152 "allow_any_host": true, 00:21:34.152 "hosts": [], 00:21:34.152 "serial_number": "SPDK00000000000001", 00:21:34.152 "model_number": "SPDK bdev Controller", 00:21:34.152 "max_namespaces": 2, 00:21:34.152 "min_cntlid": 1, 00:21:34.152 "max_cntlid": 65519, 00:21:34.152 "namespaces": [ 00:21:34.152 { 00:21:34.152 "nsid": 1, 00:21:34.152 "bdev_name": "Malloc0", 00:21:34.152 "name": "Malloc0", 00:21:34.152 "nguid": "818EDA8A4EB64CE79FFD7FB60FE217DB", 00:21:34.152 "uuid": "818eda8a-4eb6-4ce7-9ffd-7fb60fe217db" 00:21:34.152 }, 00:21:34.152 { 00:21:34.152 "nsid": 2, 00:21:34.152 "bdev_name": "Malloc1", 00:21:34.152 "name": "Malloc1", 00:21:34.152 "nguid": "DDA20AD5E50D4FAD9F2D59640AD50B42", 00:21:34.152 "uuid": "dda20ad5-e50d-4fad-9f2d-59640ad50b42" 00:21:34.152 } 00:21:34.152 ] 00:21:34.152 } 00:21:34.152 ] 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2757156 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:34.152 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:34.152 rmmod 
nvme_tcp 00:21:34.152 rmmod nvme_fabrics 00:21:34.411 rmmod nvme_keyring 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2756951 ']' 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2756951 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 2756951 ']' 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 2756951 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2756951 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2756951' 00:21:34.411 killing process with pid 2756951 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 2756951 00:21:34.411 10:50:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 2756951 00:21:34.411 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:34.411 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:34.411 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:34.411 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:34.411 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:34.411 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:34.411 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:34.669 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:34.669 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:34.669 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.669 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.669 10:50:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.586 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:36.586 00:21:36.586 real 0m8.933s 00:21:36.586 user 0m5.065s 00:21:36.586 sys 0m4.617s 00:21:36.586 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:36.586 10:50:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:36.586 ************************************ 00:21:36.586 END TEST nvmf_aer 00:21:36.586 ************************************ 00:21:36.586 10:50:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:36.586 10:50:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:36.587 10:50:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:36.587 10:50:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.587 ************************************ 00:21:36.587 START TEST nvmf_async_init 00:21:36.587 ************************************ 00:21:36.587 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:36.845 * Looking for test storage... 00:21:36.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.845 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:36.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.846 --rc genhtml_branch_coverage=1 00:21:36.846 --rc genhtml_function_coverage=1 00:21:36.846 --rc genhtml_legend=1 00:21:36.846 --rc geninfo_all_blocks=1 00:21:36.846 --rc geninfo_unexecuted_blocks=1 00:21:36.846 00:21:36.846 ' 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:36.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.846 --rc genhtml_branch_coverage=1 00:21:36.846 --rc genhtml_function_coverage=1 00:21:36.846 --rc genhtml_legend=1 00:21:36.846 --rc geninfo_all_blocks=1 00:21:36.846 --rc geninfo_unexecuted_blocks=1 00:21:36.846 00:21:36.846 ' 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:36.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.846 --rc genhtml_branch_coverage=1 00:21:36.846 --rc genhtml_function_coverage=1 00:21:36.846 --rc genhtml_legend=1 00:21:36.846 --rc geninfo_all_blocks=1 00:21:36.846 --rc geninfo_unexecuted_blocks=1 00:21:36.846 00:21:36.846 ' 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:36.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.846 --rc genhtml_branch_coverage=1 00:21:36.846 --rc genhtml_function_coverage=1 00:21:36.846 --rc genhtml_legend=1 00:21:36.846 --rc geninfo_all_blocks=1 00:21:36.846 --rc geninfo_unexecuted_blocks=1 00:21:36.846 00:21:36.846 ' 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.846 10:50:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:36.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:36.846 10:50:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=94165e630e134907aebd6c4f32f6c6dc 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:36.846 10:50:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:42.113 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:42.113 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:42.113 Found net devices under 0000:86:00.0: cvl_0_0 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:42.113 Found net devices under 0000:86:00.1: cvl_0_1 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.113 10:50:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:42.113 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:42.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:21:42.113 00:21:42.114 --- 10.0.0.2 ping statistics --- 00:21:42.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.114 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:42.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:21:42.114 00:21:42.114 --- 10.0.0.1 ping statistics --- 00:21:42.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.114 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2760548 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2760548 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 2760548 ']' 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:42.114 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.114 [2024-11-07 10:50:09.645481] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:21:42.114 [2024-11-07 10:50:09.645528] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.114 [2024-11-07 10:50:09.712261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.114 [2024-11-07 10:50:09.753962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.114 [2024-11-07 10:50:09.753999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.114 [2024-11-07 10:50:09.754007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.114 [2024-11-07 10:50:09.754013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.114 [2024-11-07 10:50:09.754018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.114 [2024-11-07 10:50:09.754581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.375 [2024-11-07 10:50:09.881186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.375 null0 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 94165e630e134907aebd6c4f32f6c6dc 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.375 [2024-11-07 10:50:09.921417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.375 10:50:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.634 nvme0n1 00:21:42.634 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.634 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:42.634 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.634 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.634 [ 00:21:42.634 { 00:21:42.634 "name": "nvme0n1", 00:21:42.634 "aliases": [ 00:21:42.634 "94165e63-0e13-4907-aebd-6c4f32f6c6dc" 00:21:42.634 ], 00:21:42.634 "product_name": "NVMe disk", 00:21:42.634 "block_size": 512, 00:21:42.634 "num_blocks": 2097152, 00:21:42.634 "uuid": "94165e63-0e13-4907-aebd-6c4f32f6c6dc", 00:21:42.634 "numa_id": 1, 00:21:42.634 "assigned_rate_limits": { 00:21:42.634 "rw_ios_per_sec": 0, 00:21:42.634 "rw_mbytes_per_sec": 0, 00:21:42.634 "r_mbytes_per_sec": 0, 00:21:42.634 "w_mbytes_per_sec": 0 00:21:42.634 }, 00:21:42.634 "claimed": false, 00:21:42.634 "zoned": false, 00:21:42.634 "supported_io_types": { 00:21:42.634 "read": true, 00:21:42.634 "write": true, 00:21:42.634 "unmap": false, 00:21:42.634 "flush": true, 00:21:42.634 "reset": true, 00:21:42.634 "nvme_admin": true, 00:21:42.634 "nvme_io": true, 00:21:42.634 "nvme_io_md": false, 00:21:42.634 "write_zeroes": true, 00:21:42.634 "zcopy": false, 00:21:42.634 "get_zone_info": false, 00:21:42.634 "zone_management": false, 00:21:42.634 "zone_append": false, 00:21:42.634 "compare": true, 00:21:42.634 "compare_and_write": true, 00:21:42.634 "abort": true, 00:21:42.634 "seek_hole": false, 00:21:42.634 "seek_data": false, 00:21:42.634 "copy": true, 00:21:42.634 "nvme_iov_md": false 00:21:42.634 }, 00:21:42.634 
"memory_domains": [ 00:21:42.634 { 00:21:42.634 "dma_device_id": "system", 00:21:42.634 "dma_device_type": 1 00:21:42.634 } 00:21:42.634 ], 00:21:42.634 "driver_specific": { 00:21:42.634 "nvme": [ 00:21:42.634 { 00:21:42.634 "trid": { 00:21:42.634 "trtype": "TCP", 00:21:42.634 "adrfam": "IPv4", 00:21:42.634 "traddr": "10.0.0.2", 00:21:42.634 "trsvcid": "4420", 00:21:42.634 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:42.634 }, 00:21:42.634 "ctrlr_data": { 00:21:42.634 "cntlid": 1, 00:21:42.634 "vendor_id": "0x8086", 00:21:42.634 "model_number": "SPDK bdev Controller", 00:21:42.634 "serial_number": "00000000000000000000", 00:21:42.634 "firmware_revision": "25.01", 00:21:42.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:42.634 "oacs": { 00:21:42.634 "security": 0, 00:21:42.634 "format": 0, 00:21:42.634 "firmware": 0, 00:21:42.634 "ns_manage": 0 00:21:42.634 }, 00:21:42.634 "multi_ctrlr": true, 00:21:42.634 "ana_reporting": false 00:21:42.634 }, 00:21:42.634 "vs": { 00:21:42.634 "nvme_version": "1.3" 00:21:42.634 }, 00:21:42.634 "ns_data": { 00:21:42.634 "id": 1, 00:21:42.634 "can_share": true 00:21:42.634 } 00:21:42.634 } 00:21:42.634 ], 00:21:42.634 "mp_policy": "active_passive" 00:21:42.634 } 00:21:42.634 } 00:21:42.634 ] 00:21:42.634 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.634 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:42.634 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.634 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.634 [2024-11-07 10:50:10.169918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:42.634 [2024-11-07 10:50:10.169977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e6fa0 (9): Bad file descriptor 00:21:42.634 [2024-11-07 10:50:10.301519] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:21:42.894 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.894 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:42.894 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.894 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.894 [ 00:21:42.894 { 00:21:42.894 "name": "nvme0n1", 00:21:42.894 "aliases": [ 00:21:42.894 "94165e63-0e13-4907-aebd-6c4f32f6c6dc" 00:21:42.894 ], 00:21:42.894 "product_name": "NVMe disk", 00:21:42.894 "block_size": 512, 00:21:42.894 "num_blocks": 2097152, 00:21:42.894 "uuid": "94165e63-0e13-4907-aebd-6c4f32f6c6dc", 00:21:42.894 "numa_id": 1, 00:21:42.894 "assigned_rate_limits": { 00:21:42.894 "rw_ios_per_sec": 0, 00:21:42.894 "rw_mbytes_per_sec": 0, 00:21:42.894 "r_mbytes_per_sec": 0, 00:21:42.894 "w_mbytes_per_sec": 0 00:21:42.895 }, 00:21:42.895 "claimed": false, 00:21:42.895 "zoned": false, 00:21:42.895 "supported_io_types": { 00:21:42.895 "read": true, 00:21:42.895 "write": true, 00:21:42.895 "unmap": false, 00:21:42.895 "flush": true, 00:21:42.895 "reset": true, 00:21:42.895 "nvme_admin": true, 00:21:42.895 "nvme_io": true, 00:21:42.895 "nvme_io_md": false, 00:21:42.895 "write_zeroes": true, 00:21:42.895 "zcopy": false, 00:21:42.895 "get_zone_info": false, 00:21:42.895 "zone_management": false, 00:21:42.895 "zone_append": false, 00:21:42.895 "compare": true, 00:21:42.895 "compare_and_write": true, 00:21:42.895 "abort": true, 00:21:42.895 "seek_hole": false, 00:21:42.895 "seek_data": false, 00:21:42.895 "copy": true, 00:21:42.895 "nvme_iov_md": false 00:21:42.895 }, 00:21:42.895 "memory_domains": [ 00:21:42.895 { 00:21:42.895 "dma_device_id": "system", 00:21:42.895 "dma_device_type": 1 00:21:42.895 } 00:21:42.895 ], 00:21:42.895 "driver_specific": { 00:21:42.895 "nvme": [ 00:21:42.895 { 00:21:42.895 "trid": { 00:21:42.895 "trtype": "TCP", 00:21:42.895 "adrfam": "IPv4", 00:21:42.895 "traddr": "10.0.0.2", 00:21:42.895 "trsvcid": "4420", 00:21:42.895 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:42.895 }, 00:21:42.895 "ctrlr_data": { 00:21:42.895 "cntlid": 2, 00:21:42.895 "vendor_id": "0x8086", 00:21:42.895 "model_number": "SPDK bdev Controller", 00:21:42.895 "serial_number": "00000000000000000000", 00:21:42.895 "firmware_revision": "25.01", 00:21:42.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:42.895 "oacs": { 00:21:42.895 "security": 0, 00:21:42.895 "format": 0, 00:21:42.895 "firmware": 0, 00:21:42.895 "ns_manage": 0 00:21:42.895 }, 00:21:42.895 "multi_ctrlr": true, 00:21:42.895 "ana_reporting": false 00:21:42.895 }, 00:21:42.895 "vs": { 00:21:42.895 "nvme_version": "1.3" 00:21:42.895 }, 00:21:42.895 "ns_data": { 00:21:42.895 "id": 1, 00:21:42.895 "can_share": true 00:21:42.895 } 00:21:42.895 } 00:21:42.895 ], 00:21:42.895 "mp_policy": "active_passive" 00:21:42.895 } 00:21:42.895 } 00:21:42.895 ] 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.dbg45dHwEw 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.dbg45dHwEw 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.dbg45dHwEw 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.895 [2024-11-07 10:50:10.358499] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:42.895 [2024-11-07 10:50:10.358591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.895 [2024-11-07 10:50:10.374553] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.895 nvme0n1 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.895 [ 00:21:42.895 { 00:21:42.895 "name": "nvme0n1", 00:21:42.895 "aliases": [ 00:21:42.895 "94165e63-0e13-4907-aebd-6c4f32f6c6dc" 00:21:42.895 ], 00:21:42.895 "product_name": "NVMe disk", 00:21:42.895 "block_size": 512, 00:21:42.895 "num_blocks": 2097152, 00:21:42.895 "uuid": "94165e63-0e13-4907-aebd-6c4f32f6c6dc", 00:21:42.895 "numa_id": 1, 00:21:42.895 "assigned_rate_limits": { 00:21:42.895 "rw_ios_per_sec": 0, 00:21:42.895 "rw_mbytes_per_sec": 0, 00:21:42.895 "r_mbytes_per_sec": 0, 00:21:42.895 "w_mbytes_per_sec": 0 00:21:42.895 }, 00:21:42.895 "claimed": false, 00:21:42.895 "zoned": false, 00:21:42.895 "supported_io_types": { 00:21:42.895 "read": true, 00:21:42.895 "write": true, 00:21:42.895 "unmap": false, 00:21:42.895 "flush": true, 00:21:42.895 "reset": true, 00:21:42.895 "nvme_admin": true, 00:21:42.895 "nvme_io": true, 00:21:42.895 "nvme_io_md": false, 00:21:42.895 "write_zeroes": true, 00:21:42.895 "zcopy": false, 00:21:42.895 "get_zone_info": false, 00:21:42.895 "zone_management": false, 00:21:42.895 "zone_append": false, 00:21:42.895 "compare": true, 00:21:42.895 "compare_and_write": true, 00:21:42.895 "abort": true, 00:21:42.895 "seek_hole": false, 00:21:42.895 "seek_data": false, 00:21:42.895 "copy": true, 00:21:42.895 "nvme_iov_md": false 00:21:42.895 }, 00:21:42.895 "memory_domains": [ 00:21:42.895 { 00:21:42.895 "dma_device_id": "system", 00:21:42.895 "dma_device_type": 1 00:21:42.895 } 00:21:42.895 ], 00:21:42.895 "driver_specific": { 00:21:42.895 "nvme": [ 00:21:42.895 { 00:21:42.895 "trid": { 00:21:42.895 "trtype": "TCP", 00:21:42.895 "adrfam": "IPv4", 00:21:42.895 "traddr": "10.0.0.2", 00:21:42.895 "trsvcid": "4421", 00:21:42.895 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:42.895 }, 00:21:42.895 "ctrlr_data": { 00:21:42.895 "cntlid": 3, 00:21:42.895 "vendor_id": "0x8086", 00:21:42.895 "model_number": "SPDK bdev Controller", 00:21:42.895 "serial_number": "00000000000000000000", 00:21:42.895 "firmware_revision": "25.01", 00:21:42.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:42.895 "oacs": { 00:21:42.895 "security": 0, 00:21:42.895 "format": 0, 00:21:42.895 "firmware": 0, 00:21:42.895 "ns_manage": 0 00:21:42.895 }, 00:21:42.895 "multi_ctrlr": true, 00:21:42.895 "ana_reporting": false 00:21:42.895 }, 00:21:42.895 "vs": { 00:21:42.895 "nvme_version": "1.3" 00:21:42.895 }, 00:21:42.895 "ns_data": { 00:21:42.895 "id": 1, 00:21:42.895 "can_share": true 00:21:42.895 } 00:21:42.895 } 00:21:42.895 ], 00:21:42.895 "mp_policy": "active_passive" 00:21:42.895 } 00:21:42.895 } 00:21:42.895 ] 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.895 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.dbg45dHwEw 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
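For the secure-channel portion above, async_init.sh writes a TLS PSK to a temp file, registers it with the keyring, restricts the subsystem to an explicit host, opens a second listener on port 4421 with --secure-channel, and reattaches the host-side bdev controller with the same key (both listen and attach log that TLS support is considered experimental). A condensed, hypothetical replay of just those steps, with the key string, NQNs and address copied from the trace and rpc_cmd again assumed to map to scripts/rpc.py:

  KEY_PATH=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  ./scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0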
00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:42.896 rmmod nvme_tcp 00:21:42.896 rmmod nvme_fabrics 00:21:42.896 rmmod nvme_keyring 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2760548 ']' 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2760548 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 2760548 ']' 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 2760548 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:42.896 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2760548 00:21:43.155 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:43.155 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:43.155 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2760548' 00:21:43.155 killing process with pid 2760548 00:21:43.155 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 2760548 00:21:43.155 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 2760548 00:21:43.155 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.155 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.155 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.155 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:43.155 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:43.155 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:43.155 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.155 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.156 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:43.156 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:43.156 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.156 10:50:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.693 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:45.693 00:21:45.693 real 0m8.585s 00:21:45.693 user 0m2.580s 00:21:45.693 sys 0m4.267s 00:21:45.693 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:45.693 10:50:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:45.693 ************************************ 00:21:45.693 END TEST nvmf_async_init 00:21:45.693 ************************************ 00:21:45.693 10:50:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:45.693 10:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:45.693 10:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:45.693 10:50:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.693 ************************************ 00:21:45.693 START TEST dma 00:21:45.693 ************************************ 00:21:45.693 10:50:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:45.693 * Looking for test storage... 00:21:45.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:45.693 10:50:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:45.693 10:50:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:21:45.693 10:50:12 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:45.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.693 --rc genhtml_branch_coverage=1 00:21:45.693 --rc genhtml_function_coverage=1 00:21:45.693 --rc genhtml_legend=1 00:21:45.693 --rc geninfo_all_blocks=1 00:21:45.693 --rc geninfo_unexecuted_blocks=1 00:21:45.693 00:21:45.693 ' 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:45.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.693 --rc genhtml_branch_coverage=1 00:21:45.693 --rc genhtml_function_coverage=1 00:21:45.693 --rc genhtml_legend=1 00:21:45.693 --rc geninfo_all_blocks=1 00:21:45.693 --rc geninfo_unexecuted_blocks=1 00:21:45.693 00:21:45.693 ' 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:45.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.693 --rc genhtml_branch_coverage=1 00:21:45.693 --rc genhtml_function_coverage=1 00:21:45.693 --rc genhtml_legend=1 00:21:45.693 --rc geninfo_all_blocks=1 00:21:45.693 --rc geninfo_unexecuted_blocks=1 00:21:45.693 00:21:45.693 ' 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:45.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.693 --rc genhtml_branch_coverage=1 00:21:45.693 --rc genhtml_function_coverage=1 00:21:45.693 --rc genhtml_legend=1 00:21:45.693 --rc geninfo_all_blocks=1 00:21:45.693 --rc geninfo_unexecuted_blocks=1 00:21:45.693 00:21:45.693 ' 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.693 
10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.693 10:50:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:45.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:45.694 00:21:45.694 real 0m0.178s 00:21:45.694 user 0m0.100s 00:21:45.694 sys 0m0.091s 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:45.694 ************************************ 00:21:45.694 END TEST dma 00:21:45.694 ************************************ 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.694 ************************************ 00:21:45.694 START TEST nvmf_identify 00:21:45.694 
************************************ 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:45.694 * Looking for test storage... 00:21:45.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:45.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.694 --rc genhtml_branch_coverage=1 00:21:45.694 --rc genhtml_function_coverage=1 00:21:45.694 --rc genhtml_legend=1 00:21:45.694 --rc geninfo_all_blocks=1 00:21:45.694 --rc geninfo_unexecuted_blocks=1 00:21:45.694 00:21:45.694 ' 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:45.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.694 --rc genhtml_branch_coverage=1 00:21:45.694 --rc genhtml_function_coverage=1 00:21:45.694 --rc genhtml_legend=1 00:21:45.694 --rc geninfo_all_blocks=1 00:21:45.694 --rc geninfo_unexecuted_blocks=1 00:21:45.694 00:21:45.694 ' 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:45.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.694 --rc genhtml_branch_coverage=1 00:21:45.694 --rc genhtml_function_coverage=1 00:21:45.694 --rc genhtml_legend=1 00:21:45.694 --rc geninfo_all_blocks=1 00:21:45.694 --rc geninfo_unexecuted_blocks=1 00:21:45.694 00:21:45.694 ' 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:45.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:45.694 --rc genhtml_branch_coverage=1 00:21:45.694 --rc genhtml_function_coverage=1 00:21:45.694 --rc genhtml_legend=1 00:21:45.694 --rc geninfo_all_blocks=1 00:21:45.694 --rc geninfo_unexecuted_blocks=1 00:21:45.694 00:21:45.694 ' 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.694 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:45.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:45.695 10:50:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:50.971 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:51.231 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:51.231 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
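The discovery loop above matches the two E810 functions (device ID 0x159b, bound to the ice driver) and then resolves each one's kernel net device through sysfs, which is how the cvl_0_0/cvl_0_1 names appear in the next lines. That lookup can be reproduced by hand; the PCI addresses below are the ones reported in this run and are otherwise machine-specific.
# Minimal sketch of the sysfs lookup performed by the common.sh helper (addresses from this log).
for pci in 0000:86:00.0 0000:86:00.1; do
    ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null   # prints e.g. cvl_0_0 / cvl_0_1 here
done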
00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:51.231 Found net devices under 0000:86:00.0: cvl_0_0 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:51.231 Found net devices under 0000:86:00.1: cvl_0_1 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:51.231 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:51.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:21:51.232 00:21:51.232 --- 10.0.0.2 ping statistics --- 00:21:51.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.232 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:51.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:21:51.232 00:21:51.232 --- 10.0.0.1 ping statistics --- 00:21:51.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.232 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:51.232 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:51.491 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:51.491 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:51.491 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.491 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2764275 00:21:51.491 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:51.491 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:51.491 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2764275 00:21:51.491 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 2764275 ']' 00:21:51.491 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.491 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:51.491 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.491 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:51.491 10:50:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.491 [2024-11-07 10:50:18.968319] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
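nvmf_tcp_init above splits the two ports between the default namespace (initiator, cvl_0_1, 10.0.0.1) and a dedicated cvl_0_0_ns_spdk namespace (target, cvl_0_0, 10.0.0.2), opens TCP/4420 with an iptables rule tagged for later cleanup, and verifies reachability with a ping in each direction before the target application is started below. A condensed sketch of that topology setup follows; interface and namespace names match this run, and everything else is an assumption about running as root on a similar two-port host.
# Hedged sketch of the namespace/addressing setup traced above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address (default namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator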
00:21:51.491 [2024-11-07 10:50:18.968361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.491 [2024-11-07 10:50:19.037589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.491 [2024-11-07 10:50:19.081109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.491 [2024-11-07 10:50:19.081148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.491 [2024-11-07 10:50:19.081156] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.491 [2024-11-07 10:50:19.081162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.491 [2024-11-07 10:50:19.081168] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.491 [2024-11-07 10:50:19.082648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.491 [2024-11-07 10:50:19.082673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.491 [2024-11-07 10:50:19.082782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.491 [2024-11-07 10:50:19.082783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.752 [2024-11-07 10:50:19.190753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.752 Malloc0 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.752 [2024-11-07 10:50:19.287797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.752 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:51.752 [ 00:21:51.752 { 00:21:51.752 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:51.752 "subtype": "Discovery", 00:21:51.752 "listen_addresses": [ 00:21:51.752 { 00:21:51.752 "trtype": "TCP", 00:21:51.752 "adrfam": "IPv4", 00:21:51.752 "traddr": "10.0.0.2", 00:21:51.752 "trsvcid": "4420" 00:21:51.752 } 00:21:51.752 ], 00:21:51.752 "allow_any_host": true, 00:21:51.752 "hosts": [] 00:21:51.752 }, 00:21:51.752 { 00:21:51.753 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.753 "subtype": "NVMe", 00:21:51.753 "listen_addresses": [ 00:21:51.753 { 00:21:51.753 "trtype": "TCP", 00:21:51.753 "adrfam": "IPv4", 00:21:51.753 "traddr": "10.0.0.2", 00:21:51.753 "trsvcid": "4420" 00:21:51.753 } 00:21:51.753 ], 00:21:51.753 "allow_any_host": true, 00:21:51.753 "hosts": [], 00:21:51.753 "serial_number": "SPDK00000000000001", 00:21:51.753 "model_number": "SPDK bdev Controller", 00:21:51.753 "max_namespaces": 32, 00:21:51.753 "min_cntlid": 1, 00:21:51.753 "max_cntlid": 65519, 00:21:51.753 "namespaces": [ 00:21:51.753 { 00:21:51.753 "nsid": 1, 00:21:51.753 "bdev_name": "Malloc0", 00:21:51.753 "name": "Malloc0", 00:21:51.753 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:51.753 "eui64": "ABCDEF0123456789", 00:21:51.753 "uuid": "b74dc378-59c2-436a-b216-e660679bd03c" 00:21:51.753 } 00:21:51.753 ] 00:21:51.753 } 00:21:51.753 ] 00:21:51.753 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.753 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:51.753 [2024-11-07 10:50:19.340374] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:21:51.753 [2024-11-07 10:50:19.340422] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764315 ] 00:21:51.753 [2024-11-07 10:50:19.382405] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:51.753 [2024-11-07 10:50:19.386466] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:51.753 [2024-11-07 10:50:19.386473] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:51.753 [2024-11-07 10:50:19.386484] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:51.753 [2024-11-07 10:50:19.386492] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:51.753 [2024-11-07 10:50:19.386935] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:51.753 [2024-11-07 10:50:19.386969] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x535690 0 00:21:51.753 [2024-11-07 10:50:19.400443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:51.753 [2024-11-07 10:50:19.400459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:51.753 [2024-11-07 10:50:19.400463] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:51.753 [2024-11-07 10:50:19.400466] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:51.753 [2024-11-07 10:50:19.400503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.400509] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.400513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x535690) 00:21:51.753 [2024-11-07 10:50:19.400526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:51.753 [2024-11-07 10:50:19.400544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597100, cid 0, qid 0 00:21:51.753 [2024-11-07 10:50:19.407444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:51.753 [2024-11-07 10:50:19.407454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:51.753 [2024-11-07 10:50:19.407457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.407462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597100) on tqpair=0x535690 00:21:51.753 [2024-11-07 10:50:19.407472] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:51.753 [2024-11-07 10:50:19.407479] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:51.753 [2024-11-07 10:50:19.407483] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:51.753 [2024-11-07 10:50:19.407498] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.407502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.407505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x535690) 00:21:51.753 [2024-11-07 10:50:19.407512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.753 [2024-11-07 10:50:19.407525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597100, cid 0, qid 0 00:21:51.753 [2024-11-07 10:50:19.407679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:51.753 [2024-11-07 10:50:19.407685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:51.753 [2024-11-07 10:50:19.407688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.407692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597100) on tqpair=0x535690 00:21:51.753 [2024-11-07 10:50:19.407702] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:51.753 [2024-11-07 10:50:19.407708] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:51.753 [2024-11-07 10:50:19.407715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.407718] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.407721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x535690) 00:21:51.753 [2024-11-07 10:50:19.407727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.753 [2024-11-07 10:50:19.407737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597100, cid 0, qid 0 00:21:51.753 [2024-11-07 10:50:19.407798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:51.753 [2024-11-07 10:50:19.407804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:51.753 [2024-11-07 10:50:19.407807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.407810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597100) on tqpair=0x535690 00:21:51.753 [2024-11-07 10:50:19.407815] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:51.753 [2024-11-07 10:50:19.407821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:51.753 [2024-11-07 10:50:19.407827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.407831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.407834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x535690) 00:21:51.753 [2024-11-07 10:50:19.407839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.753 [2024-11-07 10:50:19.407849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597100, cid 0, qid 0 
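The debug trace surrounding this point walks the controller initialization state machine for the discovery subsystem: ICReq/ICResp on the TCP admin qpair, FABRIC CONNECT (qid 0, cid 0), reads of VS and CAP, CC.EN toggled to enable the controller, and finally the IDENTIFY (06) admin command. To recreate the run being traced, the identify script starts nvmf_tgt inside the target namespace, configures it over rpc.py, and points spdk_nvme_identify at the discovery NQN, as the script lines above show. A condensed sketch is below; the relative ./build/bin and ./scripts paths are assumptions standing in for the absolute Jenkins workspace paths in the log, and the sleep is a crude stand-in for the framework's waitforlisten helper.
# Sketch of the target bring-up plus the identify invocation traced above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
sleep 2                                                   # the test uses waitforlisten instead
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./build/bin/spdk_nvme_identify -L all \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'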
00:21:51.753 [2024-11-07 10:50:19.407910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:51.753 [2024-11-07 10:50:19.407916] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:51.753 [2024-11-07 10:50:19.407919] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.407922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597100) on tqpair=0x535690 00:21:51.753 [2024-11-07 10:50:19.407927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:51.753 [2024-11-07 10:50:19.407936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.407939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.407943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x535690) 00:21:51.753 [2024-11-07 10:50:19.407948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.753 [2024-11-07 10:50:19.407957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597100, cid 0, qid 0 00:21:51.753 [2024-11-07 10:50:19.408019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:51.753 [2024-11-07 10:50:19.408024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:51.753 [2024-11-07 10:50:19.408028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.408031] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597100) on tqpair=0x535690 00:21:51.753 [2024-11-07 10:50:19.408035] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:51.753 [2024-11-07 10:50:19.408039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:51.753 [2024-11-07 10:50:19.408048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:51.753 [2024-11-07 10:50:19.408156] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:51.753 [2024-11-07 10:50:19.408161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:51.753 [2024-11-07 10:50:19.408168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.408172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.408175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x535690) 00:21:51.753 [2024-11-07 10:50:19.408180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.753 [2024-11-07 10:50:19.408190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597100, cid 0, qid 0 00:21:51.753 [2024-11-07 10:50:19.408251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:51.753 [2024-11-07 10:50:19.408257] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:51.753 [2024-11-07 10:50:19.408260] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.408263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597100) on tqpair=0x535690 00:21:51.753 [2024-11-07 10:50:19.408267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:51.753 [2024-11-07 10:50:19.408275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.408279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.753 [2024-11-07 10:50:19.408282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x535690) 00:21:51.753 [2024-11-07 10:50:19.408288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.754 [2024-11-07 10:50:19.408297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597100, cid 0, qid 0 00:21:51.754 [2024-11-07 10:50:19.408358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:51.754 [2024-11-07 10:50:19.408364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:51.754 [2024-11-07 10:50:19.408367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597100) on tqpair=0x535690 00:21:51.754 [2024-11-07 10:50:19.408374] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:51.754 [2024-11-07 10:50:19.408378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:51.754 [2024-11-07 10:50:19.408385] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:51.754 [2024-11-07 10:50:19.408398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:51.754 [2024-11-07 10:50:19.408406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x535690) 00:21:51.754 [2024-11-07 10:50:19.408416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.754 [2024-11-07 10:50:19.408425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597100, cid 0, qid 0 00:21:51.754 [2024-11-07 10:50:19.408530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:51.754 [2024-11-07 10:50:19.408538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:51.754 [2024-11-07 10:50:19.408542] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408545] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x535690): datao=0, datal=4096, cccid=0 00:21:51.754 [2024-11-07 10:50:19.408550] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x597100) on tqpair(0x535690): expected_datao=0, payload_size=4096 00:21:51.754 [2024-11-07 10:50:19.408554] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408561] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408564] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:51.754 [2024-11-07 10:50:19.408588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:51.754 [2024-11-07 10:50:19.408591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597100) on tqpair=0x535690 00:21:51.754 [2024-11-07 10:50:19.408602] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:51.754 [2024-11-07 10:50:19.408606] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:51.754 [2024-11-07 10:50:19.408611] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:51.754 [2024-11-07 10:50:19.408615] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:51.754 [2024-11-07 10:50:19.408619] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:51.754 [2024-11-07 10:50:19.408624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:51.754 [2024-11-07 10:50:19.408632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:51.754 [2024-11-07 10:50:19.408639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x535690) 00:21:51.754 [2024-11-07 10:50:19.408651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:51.754 [2024-11-07 10:50:19.408662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597100, cid 0, qid 0 00:21:51.754 [2024-11-07 10:50:19.408726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:51.754 [2024-11-07 10:50:19.408731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:51.754 [2024-11-07 10:50:19.408734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597100) on tqpair=0x535690 00:21:51.754 [2024-11-07 10:50:19.408745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x535690) 00:21:51.754 [2024-11-07 
10:50:19.408757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.754 [2024-11-07 10:50:19.408762] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x535690) 00:21:51.754 [2024-11-07 10:50:19.408776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.754 [2024-11-07 10:50:19.408781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x535690) 00:21:51.754 [2024-11-07 10:50:19.408792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.754 [2024-11-07 10:50:19.408797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:51.754 [2024-11-07 10:50:19.408809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.754 [2024-11-07 10:50:19.408813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:51.754 [2024-11-07 10:50:19.408824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:51.754 [2024-11-07 10:50:19.408830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.408833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x535690) 00:21:51.754 [2024-11-07 10:50:19.408839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.754 [2024-11-07 10:50:19.408850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597100, cid 0, qid 0 00:21:51.754 [2024-11-07 10:50:19.408855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597280, cid 1, qid 0 00:21:51.754 [2024-11-07 10:50:19.408859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597400, cid 2, qid 0 00:21:51.754 [2024-11-07 10:50:19.408864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:51.754 [2024-11-07 10:50:19.408868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597700, cid 4, qid 0 00:21:51.754 [2024-11-07 10:50:19.408967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:51.754 [2024-11-07 10:50:19.408973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:51.754 [2024-11-07 10:50:19.408976] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:51.754 
[2024-11-07 10:50:19.408979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597700) on tqpair=0x535690 00:21:51.754 [2024-11-07 10:50:19.408984] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:51.754 [2024-11-07 10:50:19.408989] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:51.754 [2024-11-07 10:50:19.408997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.409001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x535690) 00:21:51.754 [2024-11-07 10:50:19.409007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.754 [2024-11-07 10:50:19.409017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597700, cid 4, qid 0 00:21:51.754 [2024-11-07 10:50:19.409096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:51.754 [2024-11-07 10:50:19.409102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:51.754 [2024-11-07 10:50:19.409106] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.409109] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x535690): datao=0, datal=4096, cccid=4 00:21:51.754 [2024-11-07 10:50:19.409115] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x597700) on tqpair(0x535690): expected_datao=0, payload_size=4096 00:21:51.754 [2024-11-07 10:50:19.409119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.409124] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.409128] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.409144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:51.754 [2024-11-07 10:50:19.409149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:51.754 [2024-11-07 10:50:19.409152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.409156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597700) on tqpair=0x535690 00:21:51.754 [2024-11-07 10:50:19.409167] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:51.754 [2024-11-07 10:50:19.409188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.409192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x535690) 00:21:51.754 [2024-11-07 10:50:19.409198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.754 [2024-11-07 10:50:19.409204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.409207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:51.754 [2024-11-07 10:50:19.409210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x535690) 00:21:51.754 [2024-11-07 10:50:19.409215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.754 [2024-11-07 10:50:19.409229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597700, cid 4, qid 0 00:21:51.754 [2024-11-07 10:50:19.409234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597880, cid 5, qid 0 00:21:51.755 [2024-11-07 10:50:19.409336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:51.755 [2024-11-07 10:50:19.409342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:51.755 [2024-11-07 10:50:19.409345] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:51.755 [2024-11-07 10:50:19.409348] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x535690): datao=0, datal=1024, cccid=4 00:21:51.755 [2024-11-07 10:50:19.409352] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x597700) on tqpair(0x535690): expected_datao=0, payload_size=1024 00:21:51.755 [2024-11-07 10:50:19.409356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:51.755 [2024-11-07 10:50:19.409362] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:51.755 [2024-11-07 10:50:19.409365] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:51.755 [2024-11-07 10:50:19.409370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:51.755 [2024-11-07 10:50:19.409375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:51.755 [2024-11-07 10:50:19.409378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:51.755 [2024-11-07 10:50:19.409381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597880) on tqpair=0x535690 00:21:52.035 [2024-11-07 10:50:19.449518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.035 [2024-11-07 10:50:19.449530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.035 [2024-11-07 10:50:19.449533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.035 [2024-11-07 10:50:19.449536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597700) on tqpair=0x535690 00:21:52.035 [2024-11-07 10:50:19.449548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.035 [2024-11-07 10:50:19.449552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x535690) 00:21:52.035 [2024-11-07 10:50:19.449562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.035 [2024-11-07 10:50:19.449579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597700, cid 4, qid 0 00:21:52.035 [2024-11-07 10:50:19.449660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.035 [2024-11-07 10:50:19.449666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.035 [2024-11-07 10:50:19.449670] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.035 [2024-11-07 10:50:19.449673] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x535690): datao=0, datal=3072, cccid=4 00:21:52.035 [2024-11-07 10:50:19.449677] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x597700) on tqpair(0x535690): expected_datao=0, payload_size=3072 00:21:52.035 [2024-11-07 10:50:19.449680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:21:52.035 [2024-11-07 10:50:19.449698] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.035 [2024-11-07 10:50:19.449702] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.035 [2024-11-07 10:50:19.449738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.035 [2024-11-07 10:50:19.449744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.035 [2024-11-07 10:50:19.449748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.035 [2024-11-07 10:50:19.449751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597700) on tqpair=0x535690 00:21:52.035 [2024-11-07 10:50:19.449759] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.035 [2024-11-07 10:50:19.449762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x535690) 00:21:52.036 [2024-11-07 10:50:19.449768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.036 [2024-11-07 10:50:19.449781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597700, cid 4, qid 0 00:21:52.036 [2024-11-07 10:50:19.449855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.036 [2024-11-07 10:50:19.449860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.036 [2024-11-07 10:50:19.449863] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.036 [2024-11-07 10:50:19.449867] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x535690): datao=0, datal=8, cccid=4 00:21:52.036 [2024-11-07 10:50:19.449871] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x597700) on tqpair(0x535690): expected_datao=0, payload_size=8 00:21:52.036 [2024-11-07 10:50:19.449874] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.036 [2024-11-07 10:50:19.449880] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.036 [2024-11-07 10:50:19.449883] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.036 [2024-11-07 10:50:19.494443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.036 [2024-11-07 10:50:19.494452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.036 [2024-11-07 10:50:19.494455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.036 [2024-11-07 10:50:19.494459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597700) on tqpair=0x535690 00:21:52.036 ===================================================== 00:21:52.036 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:52.036 ===================================================== 00:21:52.036 Controller Capabilities/Features 00:21:52.036 ================================ 00:21:52.036 Vendor ID: 0000 00:21:52.036 Subsystem Vendor ID: 0000 00:21:52.036 Serial Number: .................... 00:21:52.036 Model Number: ........................................ 
00:21:52.036 Firmware Version: 25.01
00:21:52.036 Recommended Arb Burst: 0
00:21:52.036 IEEE OUI Identifier: 00 00 00
00:21:52.036 Multi-path I/O
00:21:52.036 May have multiple subsystem ports: No
00:21:52.036 May have multiple controllers: No
00:21:52.036 Associated with SR-IOV VF: No
00:21:52.036 Max Data Transfer Size: 131072
00:21:52.036 Max Number of Namespaces: 0
00:21:52.036 Max Number of I/O Queues: 1024
00:21:52.036 NVMe Specification Version (VS): 1.3
00:21:52.036 NVMe Specification Version (Identify): 1.3
00:21:52.036 Maximum Queue Entries: 128
00:21:52.036 Contiguous Queues Required: Yes
00:21:52.036 Arbitration Mechanisms Supported
00:21:52.036 Weighted Round Robin: Not Supported
00:21:52.036 Vendor Specific: Not Supported
00:21:52.036 Reset Timeout: 15000 ms
00:21:52.036 Doorbell Stride: 4 bytes
00:21:52.036 NVM Subsystem Reset: Not Supported
00:21:52.036 Command Sets Supported
00:21:52.036 NVM Command Set: Supported
00:21:52.036 Boot Partition: Not Supported
00:21:52.036 Memory Page Size Minimum: 4096 bytes
00:21:52.036 Memory Page Size Maximum: 4096 bytes
00:21:52.036 Persistent Memory Region: Not Supported
00:21:52.036 Optional Asynchronous Events Supported
00:21:52.036 Namespace Attribute Notices: Not Supported
00:21:52.036 Firmware Activation Notices: Not Supported
00:21:52.036 ANA Change Notices: Not Supported
00:21:52.036 PLE Aggregate Log Change Notices: Not Supported
00:21:52.036 LBA Status Info Alert Notices: Not Supported
00:21:52.036 EGE Aggregate Log Change Notices: Not Supported
00:21:52.036 Normal NVM Subsystem Shutdown event: Not Supported
00:21:52.036 Zone Descriptor Change Notices: Not Supported
00:21:52.036 Discovery Log Change Notices: Supported
00:21:52.036 Controller Attributes
00:21:52.036 128-bit Host Identifier: Not Supported
00:21:52.036 Non-Operational Permissive Mode: Not Supported
00:21:52.036 NVM Sets: Not Supported
00:21:52.036 Read Recovery Levels: Not Supported
00:21:52.036 Endurance Groups: Not Supported
00:21:52.036 Predictable Latency Mode: Not Supported
00:21:52.036 Traffic Based Keep ALive: Not Supported
00:21:52.036 Namespace Granularity: Not Supported
00:21:52.036 SQ Associations: Not Supported
00:21:52.036 UUID List: Not Supported
00:21:52.036 Multi-Domain Subsystem: Not Supported
00:21:52.036 Fixed Capacity Management: Not Supported
00:21:52.036 Variable Capacity Management: Not Supported
00:21:52.036 Delete Endurance Group: Not Supported
00:21:52.036 Delete NVM Set: Not Supported
00:21:52.036 Extended LBA Formats Supported: Not Supported
00:21:52.036 Flexible Data Placement Supported: Not Supported
00:21:52.036
00:21:52.036 Controller Memory Buffer Support
00:21:52.036 ================================
00:21:52.036 Supported: No
00:21:52.036
00:21:52.036 Persistent Memory Region Support
00:21:52.036 ================================
00:21:52.036 Supported: No
00:21:52.036
00:21:52.036 Admin Command Set Attributes
00:21:52.036 ============================
00:21:52.036 Security Send/Receive: Not Supported
00:21:52.036 Format NVM: Not Supported
00:21:52.036 Firmware Activate/Download: Not Supported
00:21:52.036 Namespace Management: Not Supported
00:21:52.036 Device Self-Test: Not Supported
00:21:52.036 Directives: Not Supported
00:21:52.036 NVMe-MI: Not Supported
00:21:52.036 Virtualization Management: Not Supported
00:21:52.036 Doorbell Buffer Config: Not Supported
00:21:52.036 Get LBA Status Capability: Not Supported
00:21:52.036 Command & Feature Lockdown Capability: Not Supported
00:21:52.036 Abort Command Limit: 1
00:21:52.036 Async Event Request Limit: 4
00:21:52.036 Number of Firmware Slots: N/A
00:21:52.036 Firmware Slot 1 Read-Only: N/A
00:21:52.036 Firmware Activation Without Reset: N/A
00:21:52.036 Multiple Update Detection Support: N/A
00:21:52.036 Firmware Update Granularity: No Information Provided
00:21:52.036 Per-Namespace SMART Log: No
00:21:52.036 Asymmetric Namespace Access Log Page: Not Supported
00:21:52.036 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:21:52.036 Command Effects Log Page: Not Supported
00:21:52.036 Get Log Page Extended Data: Supported
00:21:52.036 Telemetry Log Pages: Not Supported
00:21:52.036 Persistent Event Log Pages: Not Supported
00:21:52.036 Supported Log Pages Log Page: May Support
00:21:52.036 Commands Supported & Effects Log Page: Not Supported
00:21:52.036 Feature Identifiers & Effects Log Page:May Support
00:21:52.036 NVMe-MI Commands & Effects Log Page: May Support
00:21:52.036 Data Area 4 for Telemetry Log: Not Supported
00:21:52.036 Error Log Page Entries Supported: 128
00:21:52.036 Keep Alive: Not Supported
00:21:52.036
00:21:52.036 NVM Command Set Attributes
00:21:52.036 ==========================
00:21:52.036 Submission Queue Entry Size
00:21:52.036 Max: 1
00:21:52.036 Min: 1
00:21:52.036 Completion Queue Entry Size
00:21:52.036 Max: 1
00:21:52.036 Min: 1
00:21:52.036 Number of Namespaces: 0
00:21:52.036 Compare Command: Not Supported
00:21:52.036 Write Uncorrectable Command: Not Supported
00:21:52.036 Dataset Management Command: Not Supported
00:21:52.036 Write Zeroes Command: Not Supported
00:21:52.036 Set Features Save Field: Not Supported
00:21:52.036 Reservations: Not Supported
00:21:52.036 Timestamp: Not Supported
00:21:52.036 Copy: Not Supported
00:21:52.036 Volatile Write Cache: Not Present
00:21:52.036 Atomic Write Unit (Normal): 1
00:21:52.036 Atomic Write Unit (PFail): 1
00:21:52.036 Atomic Compare & Write Unit: 1
00:21:52.036 Fused Compare & Write: Supported
00:21:52.036 Scatter-Gather List
00:21:52.036 SGL Command Set: Supported
00:21:52.036 SGL Keyed: Supported
00:21:52.036 SGL Bit Bucket Descriptor: Not Supported
00:21:52.036 SGL Metadata Pointer: Not Supported
00:21:52.036 Oversized SGL: Not Supported
00:21:52.036 SGL Metadata Address: Not Supported
00:21:52.036 SGL Offset: Supported
00:21:52.036 Transport SGL Data Block: Not Supported
00:21:52.036 Replay Protected Memory Block: Not Supported
00:21:52.036
00:21:52.036 Firmware Slot Information
00:21:52.036 =========================
00:21:52.036 Active slot: 0
00:21:52.036
00:21:52.036
00:21:52.036 Error Log
00:21:52.036 =========
00:21:52.036
00:21:52.036 Active Namespaces
00:21:52.036 =================
00:21:52.036 Discovery Log Page
00:21:52.036 ==================
00:21:52.036 Generation Counter: 2
00:21:52.036 Number of Records: 2
00:21:52.036 Record Format: 0
00:21:52.036
00:21:52.036 Discovery Log Entry 0
00:21:52.036 ----------------------
00:21:52.036 Transport Type: 3 (TCP)
00:21:52.036 Address Family: 1 (IPv4)
00:21:52.036 Subsystem Type: 3 (Current Discovery Subsystem)
00:21:52.036 Entry Flags:
00:21:52.036 Duplicate Returned Information: 1
00:21:52.036 Explicit Persistent Connection Support for Discovery: 1
00:21:52.036 Transport Requirements:
00:21:52.036 Secure Channel: Not Required
00:21:52.036 Port ID: 0 (0x0000)
00:21:52.036 Controller ID: 65535 (0xffff)
00:21:52.036 Admin Max SQ Size: 128
00:21:52.036 Transport Service Identifier: 4420
00:21:52.036 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:21:52.037 Transport Address: 10.0.0.2
00:21:52.037
Discovery Log Entry 1 00:21:52.037 ---------------------- 00:21:52.037 Transport Type: 3 (TCP) 00:21:52.037 Address Family: 1 (IPv4) 00:21:52.037 Subsystem Type: 2 (NVM Subsystem) 00:21:52.037 Entry Flags: 00:21:52.037 Duplicate Returned Information: 0 00:21:52.037 Explicit Persistent Connection Support for Discovery: 0 00:21:52.037 Transport Requirements: 00:21:52.037 Secure Channel: Not Required 00:21:52.037 Port ID: 0 (0x0000) 00:21:52.037 Controller ID: 65535 (0xffff) 00:21:52.037 Admin Max SQ Size: 128 00:21:52.037 Transport Service Identifier: 4420 00:21:52.037 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:52.037 Transport Address: 10.0.0.2 [2024-11-07 10:50:19.494537] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:52.037 [2024-11-07 10:50:19.494547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597100) on tqpair=0x535690 00:21:52.037 [2024-11-07 10:50:19.494553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.037 [2024-11-07 10:50:19.494557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597280) on tqpair=0x535690 00:21:52.037 [2024-11-07 10:50:19.494562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.037 [2024-11-07 10:50:19.494566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597400) on tqpair=0x535690 00:21:52.037 [2024-11-07 10:50:19.494571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.037 [2024-11-07 10:50:19.494576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.037 [2024-11-07 10:50:19.494580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.037 [2024-11-07 10:50:19.494587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.494591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.494594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.037 [2024-11-07 10:50:19.494601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.037 [2024-11-07 10:50:19.494614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.037 [2024-11-07 10:50:19.494678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.037 [2024-11-07 10:50:19.494684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.037 [2024-11-07 10:50:19.494687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.494690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.037 [2024-11-07 10:50:19.494696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.494700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.494703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.037 [2024-11-07 10:50:19.494709] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.037 [2024-11-07 10:50:19.494722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.037 [2024-11-07 10:50:19.494798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.037 [2024-11-07 10:50:19.494803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.037 [2024-11-07 10:50:19.494806] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.494810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.037 [2024-11-07 10:50:19.494814] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:52.037 [2024-11-07 10:50:19.494818] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:52.037 [2024-11-07 10:50:19.494826] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.494830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.494833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.037 [2024-11-07 10:50:19.494839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.037 [2024-11-07 10:50:19.494848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.037 [2024-11-07 10:50:19.494915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.037 [2024-11-07 10:50:19.494921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.037 [2024-11-07 10:50:19.494924] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.494927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.037 [2024-11-07 10:50:19.494935] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.494939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.494942] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.037 [2024-11-07 10:50:19.494950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.037 [2024-11-07 10:50:19.494959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.037 [2024-11-07 10:50:19.495032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.037 [2024-11-07 10:50:19.495038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.037 [2024-11-07 10:50:19.495041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.037 [2024-11-07 10:50:19.495052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495059] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.037 [2024-11-07 10:50:19.495065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.037 [2024-11-07 10:50:19.495074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.037 [2024-11-07 10:50:19.495148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.037 [2024-11-07 10:50:19.495154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.037 [2024-11-07 10:50:19.495157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.037 [2024-11-07 10:50:19.495168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495175] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.037 [2024-11-07 10:50:19.495181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.037 [2024-11-07 10:50:19.495190] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.037 [2024-11-07 10:50:19.495255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.037 [2024-11-07 10:50:19.495261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.037 [2024-11-07 10:50:19.495264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495267] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.037 [2024-11-07 10:50:19.495276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.037 [2024-11-07 10:50:19.495288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.037 [2024-11-07 10:50:19.495298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.037 [2024-11-07 10:50:19.495362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.037 [2024-11-07 10:50:19.495368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.037 [2024-11-07 10:50:19.495371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.037 [2024-11-07 10:50:19.495382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.037 [2024-11-07 10:50:19.495395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.037 [2024-11-07 10:50:19.495405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.037 [2024-11-07 10:50:19.495480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.037 [2024-11-07 10:50:19.495486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.037 [2024-11-07 10:50:19.495489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.037 [2024-11-07 10:50:19.495501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.037 [2024-11-07 10:50:19.495513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.037 [2024-11-07 10:50:19.495523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.037 [2024-11-07 10:50:19.495618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.037 [2024-11-07 10:50:19.495624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.037 [2024-11-07 10:50:19.495627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495630] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.037 [2024-11-07 10:50:19.495639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.037 [2024-11-07 10:50:19.495643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.495646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.038 [2024-11-07 10:50:19.495651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.038 [2024-11-07 10:50:19.495660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.038 [2024-11-07 10:50:19.495724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.038 [2024-11-07 10:50:19.495729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.038 [2024-11-07 10:50:19.495732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.495735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.038 [2024-11-07 10:50:19.495743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.495747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.495750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.038 [2024-11-07 10:50:19.495756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.038 [2024-11-07 10:50:19.495765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.038 [2024-11-07 10:50:19.495841] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.038 [2024-11-07 10:50:19.495846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.038 [2024-11-07 10:50:19.495849] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.495853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.038 [2024-11-07 10:50:19.495860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.495864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.495867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.038 [2024-11-07 10:50:19.495873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.038 [2024-11-07 10:50:19.495882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.038 [2024-11-07 10:50:19.495958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.038 [2024-11-07 10:50:19.495964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.038 [2024-11-07 10:50:19.495967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.495970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.038 [2024-11-07 10:50:19.495978] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.495982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.495985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.038 [2024-11-07 10:50:19.495990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.038 [2024-11-07 10:50:19.496000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.038 [2024-11-07 10:50:19.496074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.038 [2024-11-07 10:50:19.496080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.038 [2024-11-07 10:50:19.496083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.038 [2024-11-07 10:50:19.496094] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496098] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.038 [2024-11-07 10:50:19.496107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.038 [2024-11-07 10:50:19.496116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.038 [2024-11-07 10:50:19.496175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.038 [2024-11-07 10:50:19.496181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.038 [2024-11-07 10:50:19.496184] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.038 [2024-11-07 10:50:19.496195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.038 [2024-11-07 10:50:19.496208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.038 [2024-11-07 10:50:19.496217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.038 [2024-11-07 10:50:19.496293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.038 [2024-11-07 10:50:19.496298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.038 [2024-11-07 10:50:19.496302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.038 [2024-11-07 10:50:19.496313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.038 [2024-11-07 10:50:19.496325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.038 [2024-11-07 10:50:19.496334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.038 [2024-11-07 10:50:19.496408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.038 [2024-11-07 10:50:19.496416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.038 [2024-11-07 10:50:19.496420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.038 [2024-11-07 10:50:19.496431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.038 [2024-11-07 10:50:19.496448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.038 [2024-11-07 10:50:19.496457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.038 [2024-11-07 10:50:19.496526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.038 [2024-11-07 10:50:19.496532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.038 [2024-11-07 10:50:19.496535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.038 
[2024-11-07 10:50:19.496546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.038 [2024-11-07 10:50:19.496558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.038 [2024-11-07 10:50:19.496567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.038 [2024-11-07 10:50:19.496638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.038 [2024-11-07 10:50:19.496643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.038 [2024-11-07 10:50:19.496646] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.038 [2024-11-07 10:50:19.496658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.038 [2024-11-07 10:50:19.496671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.038 [2024-11-07 10:50:19.496679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.038 [2024-11-07 10:50:19.496759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.038 [2024-11-07 10:50:19.496765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.038 [2024-11-07 10:50:19.496768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.038 [2024-11-07 10:50:19.496780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.038 [2024-11-07 10:50:19.496792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.038 [2024-11-07 10:50:19.496801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.038 [2024-11-07 10:50:19.496877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.038 [2024-11-07 10:50:19.496883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.038 [2024-11-07 10:50:19.496887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.038 [2024-11-07 10:50:19.496899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.496903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.038 [2024-11-07 
10:50:19.496906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.038 [2024-11-07 10:50:19.496912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.038 [2024-11-07 10:50:19.496921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.038 [2024-11-07 10:50:19.496993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.038 [2024-11-07 10:50:19.496999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.038 [2024-11-07 10:50:19.497002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.497005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.038 [2024-11-07 10:50:19.497013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.038 [2024-11-07 10:50:19.497017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.039 [2024-11-07 10:50:19.497026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.039 [2024-11-07 10:50:19.497035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.039 [2024-11-07 10:50:19.497097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.039 [2024-11-07 10:50:19.497102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.039 [2024-11-07 10:50:19.497105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.039 [2024-11-07 10:50:19.497117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.039 [2024-11-07 10:50:19.497129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.039 [2024-11-07 10:50:19.497139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.039 [2024-11-07 10:50:19.497202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.039 [2024-11-07 10:50:19.497208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.039 [2024-11-07 10:50:19.497211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.039 [2024-11-07 10:50:19.497222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.039 [2024-11-07 10:50:19.497234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.039 [2024-11-07 10:50:19.497243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.039 [2024-11-07 10:50:19.497321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.039 [2024-11-07 10:50:19.497326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.039 [2024-11-07 10:50:19.497329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.039 [2024-11-07 10:50:19.497343] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.039 [2024-11-07 10:50:19.497355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.039 [2024-11-07 10:50:19.497365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.039 [2024-11-07 10:50:19.497441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.039 [2024-11-07 10:50:19.497447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.039 [2024-11-07 10:50:19.497450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.039 [2024-11-07 10:50:19.497462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.039 [2024-11-07 10:50:19.497474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.039 [2024-11-07 10:50:19.497484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.039 [2024-11-07 10:50:19.497545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.039 [2024-11-07 10:50:19.497550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.039 [2024-11-07 10:50:19.497553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.039 [2024-11-07 10:50:19.497565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.039 [2024-11-07 10:50:19.497577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.039 [2024-11-07 10:50:19.497586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.039 [2024-11-07 
10:50:19.497649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.039 [2024-11-07 10:50:19.497655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.039 [2024-11-07 10:50:19.497658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.039 [2024-11-07 10:50:19.497669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497673] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.039 [2024-11-07 10:50:19.497681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.039 [2024-11-07 10:50:19.497690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.039 [2024-11-07 10:50:19.497766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.039 [2024-11-07 10:50:19.497772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.039 [2024-11-07 10:50:19.497775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.039 [2024-11-07 10:50:19.497788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.039 [2024-11-07 10:50:19.497801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.039 [2024-11-07 10:50:19.497810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.039 [2024-11-07 10:50:19.497883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.039 [2024-11-07 10:50:19.497889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.039 [2024-11-07 10:50:19.497892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.039 [2024-11-07 10:50:19.497903] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.497910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.039 [2024-11-07 10:50:19.497916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.039 [2024-11-07 10:50:19.497925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.039 [2024-11-07 10:50:19.497992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.039 [2024-11-07 10:50:19.497998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.039 [2024-11-07 
10:50:19.498001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.498004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.039 [2024-11-07 10:50:19.498012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.498016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.498019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.039 [2024-11-07 10:50:19.498025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.039 [2024-11-07 10:50:19.498034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.039 [2024-11-07 10:50:19.498095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.039 [2024-11-07 10:50:19.498101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.039 [2024-11-07 10:50:19.498104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.498107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.039 [2024-11-07 10:50:19.498115] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.498119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.498122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.039 [2024-11-07 10:50:19.498128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.039 [2024-11-07 10:50:19.498137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.039 [2024-11-07 10:50:19.498195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.039 [2024-11-07 10:50:19.498201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.039 [2024-11-07 10:50:19.498204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.498207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.039 [2024-11-07 10:50:19.498215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.498219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.498223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.039 [2024-11-07 10:50:19.498229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.039 [2024-11-07 10:50:19.498238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.039 [2024-11-07 10:50:19.498313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.039 [2024-11-07 10:50:19.498319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.039 [2024-11-07 10:50:19.498322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.039 [2024-11-07 10:50:19.498325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 
00:21:52.039 [2024-11-07 10:50:19.498333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.498337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.498340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.040 [2024-11-07 10:50:19.498346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.040 [2024-11-07 10:50:19.498355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.040 [2024-11-07 10:50:19.498423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.040 [2024-11-07 10:50:19.498428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.040 [2024-11-07 10:50:19.498431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.502444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.040 [2024-11-07 10:50:19.502453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.502457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.502460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x535690) 00:21:52.040 [2024-11-07 10:50:19.502466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.040 [2024-11-07 10:50:19.502477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x597580, cid 3, qid 0 00:21:52.040 [2024-11-07 10:50:19.502630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.040 [2024-11-07 10:50:19.502635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.040 [2024-11-07 10:50:19.502638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.502642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x597580) on tqpair=0x535690 00:21:52.040 [2024-11-07 10:50:19.502648] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:21:52.040 00:21:52.040 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:52.040 [2024-11-07 10:50:19.541028] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:21:52.040 [2024-11-07 10:50:19.541071] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764421 ] 00:21:52.040 [2024-11-07 10:50:19.581070] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:52.040 [2024-11-07 10:50:19.581116] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:52.040 [2024-11-07 10:50:19.581121] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:52.040 [2024-11-07 10:50:19.581132] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:52.040 [2024-11-07 10:50:19.581138] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:52.040 [2024-11-07 10:50:19.584666] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:52.040 [2024-11-07 10:50:19.584693] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ee9690 0 00:21:52.040 [2024-11-07 10:50:19.592443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:52.040 [2024-11-07 10:50:19.592455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:52.040 [2024-11-07 10:50:19.592459] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:52.040 [2024-11-07 10:50:19.592463] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:52.040 [2024-11-07 10:50:19.592488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.592494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.592497] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee9690) 00:21:52.040 [2024-11-07 10:50:19.592506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:52.040 [2024-11-07 10:50:19.592523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b100, cid 0, qid 0 00:21:52.040 [2024-11-07 10:50:19.600443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.040 [2024-11-07 10:50:19.600452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.040 [2024-11-07 10:50:19.600455] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.600459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b100) on tqpair=0x1ee9690 00:21:52.040 [2024-11-07 10:50:19.600469] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:52.040 [2024-11-07 10:50:19.600475] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:52.040 [2024-11-07 10:50:19.600480] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:52.040 [2024-11-07 10:50:19.600491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.600495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.600498] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee9690) 00:21:52.040 [2024-11-07 10:50:19.600505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.040 [2024-11-07 10:50:19.600518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b100, cid 0, qid 0 00:21:52.040 [2024-11-07 10:50:19.600671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.040 [2024-11-07 10:50:19.600676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.040 [2024-11-07 10:50:19.600679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.600683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b100) on tqpair=0x1ee9690 00:21:52.040 [2024-11-07 10:50:19.600689] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:52.040 [2024-11-07 10:50:19.600697] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:52.040 [2024-11-07 10:50:19.600703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.600707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.600710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee9690) 00:21:52.040 [2024-11-07 10:50:19.600718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.040 [2024-11-07 10:50:19.600729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b100, cid 0, qid 0 00:21:52.040 [2024-11-07 10:50:19.600798] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.040 [2024-11-07 10:50:19.600804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.040 [2024-11-07 10:50:19.600807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.600810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b100) on tqpair=0x1ee9690 00:21:52.040 [2024-11-07 10:50:19.600815] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:21:52.040 [2024-11-07 10:50:19.600822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:52.040 [2024-11-07 10:50:19.600827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.600831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.600834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee9690) 00:21:52.040 [2024-11-07 10:50:19.600840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.040 [2024-11-07 10:50:19.600850] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b100, cid 0, qid 0 00:21:52.040 [2024-11-07 10:50:19.600911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.040 [2024-11-07 10:50:19.600917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.040 [2024-11-07 
10:50:19.600920] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.600923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b100) on tqpair=0x1ee9690 00:21:52.040 [2024-11-07 10:50:19.600928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:52.040 [2024-11-07 10:50:19.600936] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.600939] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.600943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee9690) 00:21:52.040 [2024-11-07 10:50:19.600948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.040 [2024-11-07 10:50:19.600958] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b100, cid 0, qid 0 00:21:52.040 [2024-11-07 10:50:19.601021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.040 [2024-11-07 10:50:19.601027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.040 [2024-11-07 10:50:19.601030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.040 [2024-11-07 10:50:19.601034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b100) on tqpair=0x1ee9690 00:21:52.041 [2024-11-07 10:50:19.601038] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:52.041 [2024-11-07 10:50:19.601042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:52.041 [2024-11-07 10:50:19.601049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:52.041 [2024-11-07 10:50:19.601156] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:52.041 [2024-11-07 10:50:19.601161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:52.041 [2024-11-07 10:50:19.601167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee9690) 00:21:52.041 [2024-11-07 10:50:19.601184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.041 [2024-11-07 10:50:19.601193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b100, cid 0, qid 0 00:21:52.041 [2024-11-07 10:50:19.601271] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.041 [2024-11-07 10:50:19.601277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.041 [2024-11-07 10:50:19.601280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b100) on tqpair=0x1ee9690 00:21:52.041 
[2024-11-07 10:50:19.601287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:52.041 [2024-11-07 10:50:19.601296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee9690) 00:21:52.041 [2024-11-07 10:50:19.601309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.041 [2024-11-07 10:50:19.601319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b100, cid 0, qid 0 00:21:52.041 [2024-11-07 10:50:19.601383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.041 [2024-11-07 10:50:19.601389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.041 [2024-11-07 10:50:19.601392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b100) on tqpair=0x1ee9690 00:21:52.041 [2024-11-07 10:50:19.601399] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:52.041 [2024-11-07 10:50:19.601403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:52.041 [2024-11-07 10:50:19.601410] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:52.041 [2024-11-07 10:50:19.601420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:52.041 [2024-11-07 10:50:19.601428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee9690) 00:21:52.041 [2024-11-07 10:50:19.601444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.041 [2024-11-07 10:50:19.601455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b100, cid 0, qid 0 00:21:52.041 [2024-11-07 10:50:19.601553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.041 [2024-11-07 10:50:19.601559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.041 [2024-11-07 10:50:19.601563] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601566] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee9690): datao=0, datal=4096, cccid=0 00:21:52.041 [2024-11-07 10:50:19.601570] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f4b100) on tqpair(0x1ee9690): expected_datao=0, payload_size=4096 00:21:52.041 [2024-11-07 10:50:19.601574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601580] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601586] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.041 [2024-11-07 10:50:19.601611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.041 [2024-11-07 10:50:19.601614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b100) on tqpair=0x1ee9690 00:21:52.041 [2024-11-07 10:50:19.601624] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:52.041 [2024-11-07 10:50:19.601629] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:52.041 [2024-11-07 10:50:19.601633] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:52.041 [2024-11-07 10:50:19.601637] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:52.041 [2024-11-07 10:50:19.601641] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:52.041 [2024-11-07 10:50:19.601645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:52.041 [2024-11-07 10:50:19.601654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:52.041 [2024-11-07 10:50:19.601660] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee9690) 00:21:52.041 [2024-11-07 10:50:19.601673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:52.041 [2024-11-07 10:50:19.601685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b100, cid 0, qid 0 00:21:52.041 [2024-11-07 10:50:19.601752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.041 [2024-11-07 10:50:19.601757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.041 [2024-11-07 10:50:19.601761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b100) on tqpair=0x1ee9690 00:21:52.041 [2024-11-07 10:50:19.601770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ee9690) 00:21:52.041 [2024-11-07 10:50:19.601782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.041 [2024-11-07 10:50:19.601788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.041 [2024-11-07 
10:50:19.601795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ee9690) 00:21:52.041 [2024-11-07 10:50:19.601800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.041 [2024-11-07 10:50:19.601805] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601811] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ee9690) 00:21:52.041 [2024-11-07 10:50:19.601816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.041 [2024-11-07 10:50:19.601821] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee9690) 00:21:52.041 [2024-11-07 10:50:19.601835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.041 [2024-11-07 10:50:19.601839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:52.041 [2024-11-07 10:50:19.601850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:52.041 [2024-11-07 10:50:19.601856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.601859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee9690) 00:21:52.041 [2024-11-07 10:50:19.601865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.041 [2024-11-07 10:50:19.601876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b100, cid 0, qid 0 00:21:52.041 [2024-11-07 10:50:19.601881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b280, cid 1, qid 0 00:21:52.041 [2024-11-07 10:50:19.601885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b400, cid 2, qid 0 00:21:52.041 [2024-11-07 10:50:19.601889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b580, cid 3, qid 0 00:21:52.041 [2024-11-07 10:50:19.601893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b700, cid 4, qid 0 00:21:52.041 [2024-11-07 10:50:19.601989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.041 [2024-11-07 10:50:19.601996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.041 [2024-11-07 10:50:19.601999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.602003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b700) on tqpair=0x1ee9690 00:21:52.041 [2024-11-07 10:50:19.602007] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:52.041 [2024-11-07 10:50:19.602012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:52.041 [2024-11-07 10:50:19.602021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:52.041 [2024-11-07 10:50:19.602027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:52.041 [2024-11-07 10:50:19.602033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.602036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.041 [2024-11-07 10:50:19.602039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee9690) 00:21:52.042 [2024-11-07 10:50:19.602045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:52.042 [2024-11-07 10:50:19.602055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b700, cid 4, qid 0 00:21:52.042 [2024-11-07 10:50:19.602118] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.042 [2024-11-07 10:50:19.602124] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.042 [2024-11-07 10:50:19.602127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b700) on tqpair=0x1ee9690 00:21:52.042 [2024-11-07 10:50:19.602182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:52.042 [2024-11-07 10:50:19.602192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:52.042 [2024-11-07 10:50:19.602200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee9690) 00:21:52.042 [2024-11-07 10:50:19.602209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.042 [2024-11-07 10:50:19.602219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b700, cid 4, qid 0 00:21:52.042 [2024-11-07 10:50:19.602298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.042 [2024-11-07 10:50:19.602304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.042 [2024-11-07 10:50:19.602307] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602311] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee9690): datao=0, datal=4096, cccid=4 00:21:52.042 [2024-11-07 10:50:19.602315] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f4b700) on tqpair(0x1ee9690): expected_datao=0, payload_size=4096 00:21:52.042 [2024-11-07 10:50:19.602319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602325] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602328] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 
10:50:19.602340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.042 [2024-11-07 10:50:19.602345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.042 [2024-11-07 10:50:19.602348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b700) on tqpair=0x1ee9690 00:21:52.042 [2024-11-07 10:50:19.602362] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:52.042 [2024-11-07 10:50:19.602370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:52.042 [2024-11-07 10:50:19.602379] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:52.042 [2024-11-07 10:50:19.602385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee9690) 00:21:52.042 [2024-11-07 10:50:19.602394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.042 [2024-11-07 10:50:19.602405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b700, cid 4, qid 0 00:21:52.042 [2024-11-07 10:50:19.602502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.042 [2024-11-07 10:50:19.602508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.042 [2024-11-07 10:50:19.602511] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602514] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee9690): datao=0, datal=4096, cccid=4 00:21:52.042 [2024-11-07 10:50:19.602518] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f4b700) on tqpair(0x1ee9690): expected_datao=0, payload_size=4096 00:21:52.042 [2024-11-07 10:50:19.602522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602532] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602536] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.042 [2024-11-07 10:50:19.602568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.042 [2024-11-07 10:50:19.602571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b700) on tqpair=0x1ee9690 00:21:52.042 [2024-11-07 10:50:19.602585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:52.042 [2024-11-07 10:50:19.602593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:52.042 [2024-11-07 10:50:19.602600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1ee9690) 00:21:52.042 [2024-11-07 10:50:19.602609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.042 [2024-11-07 10:50:19.602620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b700, cid 4, qid 0 00:21:52.042 [2024-11-07 10:50:19.602699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.042 [2024-11-07 10:50:19.602705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.042 [2024-11-07 10:50:19.602708] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602711] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee9690): datao=0, datal=4096, cccid=4 00:21:52.042 [2024-11-07 10:50:19.602715] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f4b700) on tqpair(0x1ee9690): expected_datao=0, payload_size=4096 00:21:52.042 [2024-11-07 10:50:19.602719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602725] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602729] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.042 [2024-11-07 10:50:19.602745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.042 [2024-11-07 10:50:19.602748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b700) on tqpair=0x1ee9690 00:21:52.042 [2024-11-07 10:50:19.602760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:52.042 [2024-11-07 10:50:19.602767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:52.042 [2024-11-07 10:50:19.602774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:52.042 [2024-11-07 10:50:19.602781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:52.042 [2024-11-07 10:50:19.602785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:52.042 [2024-11-07 10:50:19.602790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:52.042 [2024-11-07 10:50:19.602795] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:52.042 [2024-11-07 10:50:19.602799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:52.042 [2024-11-07 10:50:19.602804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:52.042 [2024-11-07 10:50:19.602815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.042 
[2024-11-07 10:50:19.602819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee9690) 00:21:52.042 [2024-11-07 10:50:19.602824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.042 [2024-11-07 10:50:19.602832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ee9690) 00:21:52.042 [2024-11-07 10:50:19.602844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.042 [2024-11-07 10:50:19.602856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b700, cid 4, qid 0 00:21:52.042 [2024-11-07 10:50:19.602861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b880, cid 5, qid 0 00:21:52.042 [2024-11-07 10:50:19.602938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.042 [2024-11-07 10:50:19.602944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.042 [2024-11-07 10:50:19.602947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b700) on tqpair=0x1ee9690 00:21:52.042 [2024-11-07 10:50:19.602956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.042 [2024-11-07 10:50:19.602961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.042 [2024-11-07 10:50:19.602964] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b880) on tqpair=0x1ee9690 00:21:52.042 [2024-11-07 10:50:19.602976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.602979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ee9690) 00:21:52.042 [2024-11-07 10:50:19.602985] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.042 [2024-11-07 10:50:19.602994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b880, cid 5, qid 0 00:21:52.042 [2024-11-07 10:50:19.603058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.042 [2024-11-07 10:50:19.603064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.042 [2024-11-07 10:50:19.603067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.603070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b880) on tqpair=0x1ee9690 00:21:52.042 [2024-11-07 10:50:19.603078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.042 [2024-11-07 10:50:19.603082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ee9690) 00:21:52.042 [2024-11-07 10:50:19.603087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.042 [2024-11-07 10:50:19.603096] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b880, cid 5, qid 0 00:21:52.042 [2024-11-07 10:50:19.603164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.042 [2024-11-07 10:50:19.603170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.043 [2024-11-07 10:50:19.603173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b880) on tqpair=0x1ee9690 00:21:52.043 [2024-11-07 10:50:19.603184] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ee9690) 00:21:52.043 [2024-11-07 10:50:19.603193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.043 [2024-11-07 10:50:19.603202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b880, cid 5, qid 0 00:21:52.043 [2024-11-07 10:50:19.603268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.043 [2024-11-07 10:50:19.603274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.043 [2024-11-07 10:50:19.603281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b880) on tqpair=0x1ee9690 00:21:52.043 [2024-11-07 10:50:19.603296] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ee9690) 00:21:52.043 [2024-11-07 10:50:19.603306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.043 [2024-11-07 10:50:19.603312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ee9690) 00:21:52.043 [2024-11-07 10:50:19.603320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.043 [2024-11-07 10:50:19.603327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ee9690) 00:21:52.043 [2024-11-07 10:50:19.603335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.043 [2024-11-07 10:50:19.603342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ee9690) 00:21:52.043 [2024-11-07 10:50:19.603350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.043 [2024-11-07 10:50:19.603361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b880, cid 5, qid 0 00:21:52.043 
[2024-11-07 10:50:19.603365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b700, cid 4, qid 0 00:21:52.043 [2024-11-07 10:50:19.603369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4ba00, cid 6, qid 0 00:21:52.043 [2024-11-07 10:50:19.603373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4bb80, cid 7, qid 0 00:21:52.043 [2024-11-07 10:50:19.603535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.043 [2024-11-07 10:50:19.603542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.043 [2024-11-07 10:50:19.603545] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603548] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee9690): datao=0, datal=8192, cccid=5 00:21:52.043 [2024-11-07 10:50:19.603552] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f4b880) on tqpair(0x1ee9690): expected_datao=0, payload_size=8192 00:21:52.043 [2024-11-07 10:50:19.603556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603562] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603566] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603571] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.043 [2024-11-07 10:50:19.603576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.043 [2024-11-07 10:50:19.603579] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603582] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee9690): datao=0, datal=512, cccid=4 00:21:52.043 [2024-11-07 10:50:19.603586] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f4b700) on tqpair(0x1ee9690): expected_datao=0, payload_size=512 00:21:52.043 [2024-11-07 10:50:19.603590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603595] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603599] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.043 [2024-11-07 10:50:19.603611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.043 [2024-11-07 10:50:19.603614] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603617] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee9690): datao=0, datal=512, cccid=6 00:21:52.043 [2024-11-07 10:50:19.603621] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f4ba00) on tqpair(0x1ee9690): expected_datao=0, payload_size=512 00:21:52.043 [2024-11-07 10:50:19.603624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603630] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603633] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:52.043 [2024-11-07 10:50:19.603643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:52.043 [2024-11-07 10:50:19.603646] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603649] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ee9690): datao=0, datal=4096, cccid=7 00:21:52.043 [2024-11-07 10:50:19.603653] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f4bb80) on tqpair(0x1ee9690): expected_datao=0, payload_size=4096 00:21:52.043 [2024-11-07 10:50:19.603657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603662] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603665] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.043 [2024-11-07 10:50:19.603678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.043 [2024-11-07 10:50:19.603681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b880) on tqpair=0x1ee9690 00:21:52.043 [2024-11-07 10:50:19.603694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.043 [2024-11-07 10:50:19.603699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.043 [2024-11-07 10:50:19.603703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b700) on tqpair=0x1ee9690 00:21:52.043 [2024-11-07 10:50:19.603715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.043 [2024-11-07 10:50:19.603720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.043 [2024-11-07 10:50:19.603723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4ba00) on tqpair=0x1ee9690 00:21:52.043 [2024-11-07 10:50:19.603732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.043 [2024-11-07 10:50:19.603737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.043 [2024-11-07 10:50:19.603740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.043 [2024-11-07 10:50:19.603744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4bb80) on tqpair=0x1ee9690 00:21:52.043 ===================================================== 00:21:52.043 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:52.043 ===================================================== 00:21:52.043 Controller Capabilities/Features 00:21:52.043 ================================ 00:21:52.043 Vendor ID: 8086 00:21:52.043 Subsystem Vendor ID: 8086 00:21:52.043 Serial Number: SPDK00000000000001 00:21:52.043 Model Number: SPDK bdev Controller 00:21:52.043 Firmware Version: 25.01 00:21:52.043 Recommended Arb Burst: 6 00:21:52.043 IEEE OUI Identifier: e4 d2 5c 00:21:52.043 Multi-path I/O 00:21:52.043 May have multiple subsystem ports: Yes 00:21:52.043 May have multiple controllers: Yes 00:21:52.043 Associated with SR-IOV VF: No 00:21:52.043 Max Data Transfer Size: 131072 00:21:52.043 Max Number of Namespaces: 32 00:21:52.043 Max Number of I/O Queues: 127 00:21:52.043 NVMe Specification Version (VS): 1.3 00:21:52.043 NVMe Specification Version (Identify): 1.3 
00:21:52.043 Maximum Queue Entries: 128 00:21:52.043 Contiguous Queues Required: Yes 00:21:52.043 Arbitration Mechanisms Supported 00:21:52.043 Weighted Round Robin: Not Supported 00:21:52.043 Vendor Specific: Not Supported 00:21:52.043 Reset Timeout: 15000 ms 00:21:52.043 Doorbell Stride: 4 bytes 00:21:52.043 NVM Subsystem Reset: Not Supported 00:21:52.043 Command Sets Supported 00:21:52.043 NVM Command Set: Supported 00:21:52.043 Boot Partition: Not Supported 00:21:52.043 Memory Page Size Minimum: 4096 bytes 00:21:52.043 Memory Page Size Maximum: 4096 bytes 00:21:52.043 Persistent Memory Region: Not Supported 00:21:52.043 Optional Asynchronous Events Supported 00:21:52.043 Namespace Attribute Notices: Supported 00:21:52.043 Firmware Activation Notices: Not Supported 00:21:52.043 ANA Change Notices: Not Supported 00:21:52.043 PLE Aggregate Log Change Notices: Not Supported 00:21:52.043 LBA Status Info Alert Notices: Not Supported 00:21:52.043 EGE Aggregate Log Change Notices: Not Supported 00:21:52.043 Normal NVM Subsystem Shutdown event: Not Supported 00:21:52.043 Zone Descriptor Change Notices: Not Supported 00:21:52.043 Discovery Log Change Notices: Not Supported 00:21:52.043 Controller Attributes 00:21:52.043 128-bit Host Identifier: Supported 00:21:52.043 Non-Operational Permissive Mode: Not Supported 00:21:52.043 NVM Sets: Not Supported 00:21:52.043 Read Recovery Levels: Not Supported 00:21:52.043 Endurance Groups: Not Supported 00:21:52.043 Predictable Latency Mode: Not Supported 00:21:52.043 Traffic Based Keep ALive: Not Supported 00:21:52.043 Namespace Granularity: Not Supported 00:21:52.043 SQ Associations: Not Supported 00:21:52.043 UUID List: Not Supported 00:21:52.043 Multi-Domain Subsystem: Not Supported 00:21:52.044 Fixed Capacity Management: Not Supported 00:21:52.044 Variable Capacity Management: Not Supported 00:21:52.044 Delete Endurance Group: Not Supported 00:21:52.044 Delete NVM Set: Not Supported 00:21:52.044 Extended LBA Formats Supported: Not Supported 00:21:52.044 Flexible Data Placement Supported: Not Supported 00:21:52.044 00:21:52.044 Controller Memory Buffer Support 00:21:52.044 ================================ 00:21:52.044 Supported: No 00:21:52.044 00:21:52.044 Persistent Memory Region Support 00:21:52.044 ================================ 00:21:52.044 Supported: No 00:21:52.044 00:21:52.044 Admin Command Set Attributes 00:21:52.044 ============================ 00:21:52.044 Security Send/Receive: Not Supported 00:21:52.044 Format NVM: Not Supported 00:21:52.044 Firmware Activate/Download: Not Supported 00:21:52.044 Namespace Management: Not Supported 00:21:52.044 Device Self-Test: Not Supported 00:21:52.044 Directives: Not Supported 00:21:52.044 NVMe-MI: Not Supported 00:21:52.044 Virtualization Management: Not Supported 00:21:52.044 Doorbell Buffer Config: Not Supported 00:21:52.044 Get LBA Status Capability: Not Supported 00:21:52.044 Command & Feature Lockdown Capability: Not Supported 00:21:52.044 Abort Command Limit: 4 00:21:52.044 Async Event Request Limit: 4 00:21:52.044 Number of Firmware Slots: N/A 00:21:52.044 Firmware Slot 1 Read-Only: N/A 00:21:52.044 Firmware Activation Without Reset: N/A 00:21:52.044 Multiple Update Detection Support: N/A 00:21:52.044 Firmware Update Granularity: No Information Provided 00:21:52.044 Per-Namespace SMART Log: No 00:21:52.044 Asymmetric Namespace Access Log Page: Not Supported 00:21:52.044 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:52.044 Command Effects Log Page: Supported 00:21:52.044 Get Log Page Extended 
Data: Supported 00:21:52.044 Telemetry Log Pages: Not Supported 00:21:52.044 Persistent Event Log Pages: Not Supported 00:21:52.044 Supported Log Pages Log Page: May Support 00:21:52.044 Commands Supported & Effects Log Page: Not Supported 00:21:52.044 Feature Identifiers & Effects Log Page:May Support 00:21:52.044 NVMe-MI Commands & Effects Log Page: May Support 00:21:52.044 Data Area 4 for Telemetry Log: Not Supported 00:21:52.044 Error Log Page Entries Supported: 128 00:21:52.044 Keep Alive: Supported 00:21:52.044 Keep Alive Granularity: 10000 ms 00:21:52.044 00:21:52.044 NVM Command Set Attributes 00:21:52.044 ========================== 00:21:52.044 Submission Queue Entry Size 00:21:52.044 Max: 64 00:21:52.044 Min: 64 00:21:52.044 Completion Queue Entry Size 00:21:52.044 Max: 16 00:21:52.044 Min: 16 00:21:52.044 Number of Namespaces: 32 00:21:52.044 Compare Command: Supported 00:21:52.044 Write Uncorrectable Command: Not Supported 00:21:52.044 Dataset Management Command: Supported 00:21:52.044 Write Zeroes Command: Supported 00:21:52.044 Set Features Save Field: Not Supported 00:21:52.044 Reservations: Supported 00:21:52.044 Timestamp: Not Supported 00:21:52.044 Copy: Supported 00:21:52.044 Volatile Write Cache: Present 00:21:52.044 Atomic Write Unit (Normal): 1 00:21:52.044 Atomic Write Unit (PFail): 1 00:21:52.044 Atomic Compare & Write Unit: 1 00:21:52.044 Fused Compare & Write: Supported 00:21:52.044 Scatter-Gather List 00:21:52.044 SGL Command Set: Supported 00:21:52.044 SGL Keyed: Supported 00:21:52.044 SGL Bit Bucket Descriptor: Not Supported 00:21:52.044 SGL Metadata Pointer: Not Supported 00:21:52.044 Oversized SGL: Not Supported 00:21:52.044 SGL Metadata Address: Not Supported 00:21:52.044 SGL Offset: Supported 00:21:52.044 Transport SGL Data Block: Not Supported 00:21:52.044 Replay Protected Memory Block: Not Supported 00:21:52.044 00:21:52.044 Firmware Slot Information 00:21:52.044 ========================= 00:21:52.044 Active slot: 1 00:21:52.044 Slot 1 Firmware Revision: 25.01 00:21:52.044 00:21:52.044 00:21:52.044 Commands Supported and Effects 00:21:52.044 ============================== 00:21:52.044 Admin Commands 00:21:52.044 -------------- 00:21:52.044 Get Log Page (02h): Supported 00:21:52.044 Identify (06h): Supported 00:21:52.044 Abort (08h): Supported 00:21:52.044 Set Features (09h): Supported 00:21:52.044 Get Features (0Ah): Supported 00:21:52.044 Asynchronous Event Request (0Ch): Supported 00:21:52.044 Keep Alive (18h): Supported 00:21:52.044 I/O Commands 00:21:52.044 ------------ 00:21:52.044 Flush (00h): Supported LBA-Change 00:21:52.044 Write (01h): Supported LBA-Change 00:21:52.044 Read (02h): Supported 00:21:52.044 Compare (05h): Supported 00:21:52.044 Write Zeroes (08h): Supported LBA-Change 00:21:52.044 Dataset Management (09h): Supported LBA-Change 00:21:52.044 Copy (19h): Supported LBA-Change 00:21:52.044 00:21:52.044 Error Log 00:21:52.044 ========= 00:21:52.044 00:21:52.044 Arbitration 00:21:52.044 =========== 00:21:52.044 Arbitration Burst: 1 00:21:52.044 00:21:52.044 Power Management 00:21:52.044 ================ 00:21:52.044 Number of Power States: 1 00:21:52.044 Current Power State: Power State #0 00:21:52.044 Power State #0: 00:21:52.044 Max Power: 0.00 W 00:21:52.044 Non-Operational State: Operational 00:21:52.044 Entry Latency: Not Reported 00:21:52.044 Exit Latency: Not Reported 00:21:52.044 Relative Read Throughput: 0 00:21:52.044 Relative Read Latency: 0 00:21:52.044 Relative Write Throughput: 0 00:21:52.044 Relative Write Latency: 0 
00:21:52.044 Idle Power: Not Reported 00:21:52.044 Active Power: Not Reported 00:21:52.044 Non-Operational Permissive Mode: Not Supported 00:21:52.044 00:21:52.044 Health Information 00:21:52.044 ================== 00:21:52.044 Critical Warnings: 00:21:52.044 Available Spare Space: OK 00:21:52.044 Temperature: OK 00:21:52.044 Device Reliability: OK 00:21:52.044 Read Only: No 00:21:52.044 Volatile Memory Backup: OK 00:21:52.044 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:52.044 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:52.044 Available Spare: 0% 00:21:52.044 Available Spare Threshold: 0% 00:21:52.044 Life Percentage Used:[2024-11-07 10:50:19.603823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.044 [2024-11-07 10:50:19.603828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ee9690) 00:21:52.044 [2024-11-07 10:50:19.603835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.044 [2024-11-07 10:50:19.603846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4bb80, cid 7, qid 0 00:21:52.044 [2024-11-07 10:50:19.603918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.044 [2024-11-07 10:50:19.603924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.044 [2024-11-07 10:50:19.603927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.044 [2024-11-07 10:50:19.603932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4bb80) on tqpair=0x1ee9690 00:21:52.044 [2024-11-07 10:50:19.603961] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:52.044 [2024-11-07 10:50:19.603971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b100) on tqpair=0x1ee9690 00:21:52.044 [2024-11-07 10:50:19.603977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.044 [2024-11-07 10:50:19.603981] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b280) on tqpair=0x1ee9690 00:21:52.044 [2024-11-07 10:50:19.603986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.044 [2024-11-07 10:50:19.603990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b400) on tqpair=0x1ee9690 00:21:52.044 [2024-11-07 10:50:19.603994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.044 [2024-11-07 10:50:19.603998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b580) on tqpair=0x1ee9690 00:21:52.044 [2024-11-07 10:50:19.604003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.044 [2024-11-07 10:50:19.604010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.044 [2024-11-07 10:50:19.604013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.044 [2024-11-07 10:50:19.604016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee9690) 00:21:52.044 [2024-11-07 10:50:19.604022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:52.044 [2024-11-07 10:50:19.604033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b580, cid 3, qid 0 00:21:52.045 [2024-11-07 10:50:19.604098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.045 [2024-11-07 10:50:19.604104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.045 [2024-11-07 10:50:19.604107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.045 [2024-11-07 10:50:19.604110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b580) on tqpair=0x1ee9690 00:21:52.045 [2024-11-07 10:50:19.604116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.045 [2024-11-07 10:50:19.604119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.045 [2024-11-07 10:50:19.604123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee9690) 00:21:52.045 [2024-11-07 10:50:19.604128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.045 [2024-11-07 10:50:19.604140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b580, cid 3, qid 0 00:21:52.045 [2024-11-07 10:50:19.604212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.045 [2024-11-07 10:50:19.604218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.045 [2024-11-07 10:50:19.604221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.045 [2024-11-07 10:50:19.604225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b580) on tqpair=0x1ee9690 00:21:52.045 [2024-11-07 10:50:19.604229] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:52.045 [2024-11-07 10:50:19.604233] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:52.045 [2024-11-07 10:50:19.604241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.045 [2024-11-07 10:50:19.604245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.045 [2024-11-07 10:50:19.604248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee9690) 00:21:52.045 [2024-11-07 10:50:19.604254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.045 [2024-11-07 10:50:19.604265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b580, cid 3, qid 0 00:21:52.045 [2024-11-07 10:50:19.604329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.045 [2024-11-07 10:50:19.604335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.045 [2024-11-07 10:50:19.604338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.045 [2024-11-07 10:50:19.604341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b580) on tqpair=0x1ee9690 00:21:52.045 [2024-11-07 10:50:19.604350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.045 [2024-11-07 10:50:19.604353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.045 [2024-11-07 10:50:19.604356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee9690) 00:21:52.045 [2024-11-07 10:50:19.604362] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.045 [2024-11-07 10:50:19.604371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b580, cid 3, qid 0 00:21:52.045 [2024-11-07 10:50:19.608444] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.045 [2024-11-07 10:50:19.608452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.045 [2024-11-07 10:50:19.608456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.045 [2024-11-07 10:50:19.608459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b580) on tqpair=0x1ee9690 00:21:52.045 [2024-11-07 10:50:19.608469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:52.045 [2024-11-07 10:50:19.608474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:52.045 [2024-11-07 10:50:19.608477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ee9690) 00:21:52.045 [2024-11-07 10:50:19.608483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.045 [2024-11-07 10:50:19.608494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f4b580, cid 3, qid 0 00:21:52.045 [2024-11-07 10:50:19.608651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:52.045 [2024-11-07 10:50:19.608657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:52.045 [2024-11-07 10:50:19.608660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:52.045 [2024-11-07 10:50:19.608663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f4b580) on tqpair=0x1ee9690 00:21:52.045 [2024-11-07 10:50:19.608670] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:21:52.045 0% 00:21:52.045 Data Units Read: 0 00:21:52.045 Data Units Written: 0 00:21:52.045 Host Read Commands: 0 00:21:52.045 Host Write Commands: 0 00:21:52.045 Controller Busy Time: 0 minutes 00:21:52.045 Power Cycles: 0 00:21:52.045 Power On Hours: 0 hours 00:21:52.045 Unsafe Shutdowns: 0 00:21:52.045 Unrecoverable Media Errors: 0 00:21:52.045 Lifetime Error Log Entries: 0 00:21:52.045 Warning Temperature Time: 0 minutes 00:21:52.045 Critical Temperature Time: 0 minutes 00:21:52.045 00:21:52.045 Number of Queues 00:21:52.045 ================ 00:21:52.045 Number of I/O Submission Queues: 127 00:21:52.045 Number of I/O Completion Queues: 127 00:21:52.045 00:21:52.045 Active Namespaces 00:21:52.045 ================= 00:21:52.045 Namespace ID:1 00:21:52.045 Error Recovery Timeout: Unlimited 00:21:52.045 Command Set Identifier: NVM (00h) 00:21:52.045 Deallocate: Supported 00:21:52.045 Deallocated/Unwritten Error: Not Supported 00:21:52.045 Deallocated Read Value: Unknown 00:21:52.045 Deallocate in Write Zeroes: Not Supported 00:21:52.045 Deallocated Guard Field: 0xFFFF 00:21:52.045 Flush: Supported 00:21:52.045 Reservation: Supported 00:21:52.045 Namespace Sharing Capabilities: Multiple Controllers 00:21:52.045 Size (in LBAs): 131072 (0GiB) 00:21:52.045 Capacity (in LBAs): 131072 (0GiB) 00:21:52.045 Utilization (in LBAs): 131072 (0GiB) 00:21:52.045 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:52.045 EUI64: ABCDEF0123456789 00:21:52.045 UUID: b74dc378-59c2-436a-b216-e660679bd03c 00:21:52.045 Thin Provisioning: Not 
Supported 00:21:52.045 Per-NS Atomic Units: Yes 00:21:52.045 Atomic Boundary Size (Normal): 0 00:21:52.045 Atomic Boundary Size (PFail): 0 00:21:52.045 Atomic Boundary Offset: 0 00:21:52.045 Maximum Single Source Range Length: 65535 00:21:52.045 Maximum Copy Length: 65535 00:21:52.045 Maximum Source Range Count: 1 00:21:52.045 NGUID/EUI64 Never Reused: No 00:21:52.045 Namespace Write Protected: No 00:21:52.045 Number of LBA Formats: 1 00:21:52.045 Current LBA Format: LBA Format #00 00:21:52.045 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:52.045 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:52.045 rmmod nvme_tcp 00:21:52.045 rmmod nvme_fabrics 00:21:52.045 rmmod nvme_keyring 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2764275 ']' 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2764275 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 2764275 ']' 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 2764275 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:52.045 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2764275 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2764275' 00:21:52.305 killing process with pid 2764275 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 2764275 
00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 2764275 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.305 10:50:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:54.843 00:21:54.843 real 0m8.908s 00:21:54.843 user 0m5.051s 00:21:54.843 sys 0m4.589s 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:54.843 ************************************ 00:21:54.843 END TEST nvmf_identify 00:21:54.843 ************************************ 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.843 ************************************ 00:21:54.843 START TEST nvmf_perf 00:21:54.843 ************************************ 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:54.843 * Looking for test storage... 
00:21:54.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:54.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.843 --rc genhtml_branch_coverage=1 00:21:54.843 --rc genhtml_function_coverage=1 00:21:54.843 --rc genhtml_legend=1 00:21:54.843 --rc geninfo_all_blocks=1 00:21:54.843 --rc geninfo_unexecuted_blocks=1 00:21:54.843 00:21:54.843 ' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:54.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.843 --rc genhtml_branch_coverage=1 00:21:54.843 --rc genhtml_function_coverage=1 00:21:54.843 --rc genhtml_legend=1 00:21:54.843 --rc geninfo_all_blocks=1 00:21:54.843 --rc geninfo_unexecuted_blocks=1 00:21:54.843 00:21:54.843 ' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:54.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.843 --rc genhtml_branch_coverage=1 00:21:54.843 --rc genhtml_function_coverage=1 00:21:54.843 --rc genhtml_legend=1 00:21:54.843 --rc geninfo_all_blocks=1 00:21:54.843 --rc geninfo_unexecuted_blocks=1 00:21:54.843 00:21:54.843 ' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:54.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:54.843 --rc genhtml_branch_coverage=1 00:21:54.843 --rc genhtml_function_coverage=1 00:21:54.843 --rc genhtml_legend=1 00:21:54.843 --rc geninfo_all_blocks=1 00:21:54.843 --rc geninfo_unexecuted_blocks=1 00:21:54.843 00:21:54.843 ' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:54.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.843 10:50:22 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.843 10:50:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:01.415 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:01.415 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:01.415 Found net devices under 0000:86:00.0: cvl_0_0 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:01.415 10:50:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:01.415 Found net devices under 0000:86:00.1: cvl_0_1 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.415 10:50:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.415 10:50:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.415 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:22:01.416 00:22:01.416 --- 10.0.0.2 ping statistics --- 00:22:01.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.416 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:01.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:22:01.416 00:22:01.416 --- 10.0.0.1 ping statistics --- 00:22:01.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.416 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2768035 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2768035 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 2768035 ']' 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:01.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:01.416 [2024-11-07 10:50:28.193806] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:22:01.416 [2024-11-07 10:50:28.193854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.416 [2024-11-07 10:50:28.257704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:01.416 [2024-11-07 10:50:28.301131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.416 [2024-11-07 10:50:28.301171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.416 [2024-11-07 10:50:28.301178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.416 [2024-11-07 10:50:28.301184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.416 [2024-11-07 10:50:28.301190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.416 [2024-11-07 10:50:28.305452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.416 [2024-11-07 10:50:28.305471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.416 [2024-11-07 10:50:28.305578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:01.416 [2024-11-07 10:50:28.305580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:01.416 10:50:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:03.948 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:03.948 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:04.206 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:04.206 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:04.464 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
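For readers following the trace, the target-side bring-up above and immediately below reduces to a short rpc.py sequence. The block below is a condensed sketch only, restating commands that appear verbatim in this trace; it assumes nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock RPC socket shown above, and it reuses the same NQN, serial, listener address, and bdev names (Malloc0 from the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE defaults, Nvme0n1 from the local controller at 0000:5e:00.0). The shell variable "rpc" is shorthand introduced here for the rpc_py path used by the harness.

# Target-side bring-up (sketch; nvmf_tgt already started by nvmfappstart above)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Back-end bdevs: Malloc0 (size 64, block size 512, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
$rpc bdev_malloc_create 64 512

# TCP transport, subsystem, namespaces, and listeners used by the perf runs traced below
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # malloc-backed namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe at 0000:5e:00.0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator side, matching the first TCP perf run traced below
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

With both bdevs exported as namespaces of nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, the latency tables that follow report one row per namespace (NSID 1 and NSID 2), one backed by the malloc bdev and one by the local NVMe device.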
00:22:04.464 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:04.464 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:04.464 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:04.464 10:50:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:04.464 [2024-11-07 10:50:32.096638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.464 10:50:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:04.722 10:50:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:04.722 10:50:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:04.981 10:50:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:04.981 10:50:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:05.239 10:50:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.239 [2024-11-07 10:50:32.905001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.497 10:50:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:05.497 10:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:05.497 10:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:05.497 10:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:05.497 10:50:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:06.876 Initializing NVMe Controllers 00:22:06.876 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:06.876 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:06.876 Initialization complete. Launching workers. 
00:22:06.876 ======================================================== 00:22:06.876 Latency(us) 00:22:06.876 Device Information : IOPS MiB/s Average min max 00:22:06.876 PCIE (0000:5e:00.0) NSID 1 from core 0: 97309.22 380.11 328.23 45.42 4299.56 00:22:06.876 ======================================================== 00:22:06.876 Total : 97309.22 380.11 328.23 45.42 4299.56 00:22:06.876 00:22:06.876 10:50:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:08.251 Initializing NVMe Controllers 00:22:08.251 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:08.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:08.251 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:08.251 Initialization complete. Launching workers. 00:22:08.251 ======================================================== 00:22:08.251 Latency(us) 00:22:08.251 Device Information : IOPS MiB/s Average min max 00:22:08.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 109.00 0.43 9448.41 115.27 44827.23 00:22:08.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15223.09 7191.40 48880.11 00:22:08.251 ======================================================== 00:22:08.251 Total : 175.00 0.68 11626.29 115.27 48880.11 00:22:08.251 00:22:08.251 10:50:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:09.622 Initializing NVMe Controllers 00:22:09.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:09.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:09.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:09.622 Initialization complete. Launching workers. 00:22:09.622 ======================================================== 00:22:09.623 Latency(us) 00:22:09.623 Device Information : IOPS MiB/s Average min max 00:22:09.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10867.71 42.45 2944.50 476.33 6409.65 00:22:09.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3886.90 15.18 8274.33 5568.86 15967.10 00:22:09.623 ======================================================== 00:22:09.623 Total : 14754.60 57.64 4348.57 476.33 15967.10 00:22:09.623 00:22:09.623 10:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:09.623 10:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:09.623 10:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:12.151 Initializing NVMe Controllers 00:22:12.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:12.151 Controller IO queue size 128, less than required. 00:22:12.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:12.151 Controller IO queue size 128, less than required. 00:22:12.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:12.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:12.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:12.151 Initialization complete. Launching workers. 00:22:12.151 ======================================================== 00:22:12.151 Latency(us) 00:22:12.151 Device Information : IOPS MiB/s Average min max 00:22:12.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1757.98 439.49 73764.72 54719.89 120691.19 00:22:12.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 608.49 152.12 220106.65 74345.84 358932.53 00:22:12.151 ======================================================== 00:22:12.151 Total : 2366.47 591.62 111393.74 54719.89 358932.53 00:22:12.151 00:22:12.151 10:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:12.716 No valid NVMe controllers or AIO or URING devices found 00:22:12.716 Initializing NVMe Controllers 00:22:12.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:12.717 Controller IO queue size 128, less than required. 00:22:12.717 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:12.717 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:12.717 Controller IO queue size 128, less than required. 00:22:12.717 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:12.717 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:12.717 WARNING: Some requested NVMe devices were skipped 00:22:12.717 10:50:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:15.247 Initializing NVMe Controllers 00:22:15.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.247 Controller IO queue size 128, less than required. 00:22:15.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.247 Controller IO queue size 128, less than required. 00:22:15.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:15.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:15.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:15.247 Initialization complete. Launching workers. 
00:22:15.247 00:22:15.247 ==================== 00:22:15.247 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:15.247 TCP transport: 00:22:15.247 polls: 12548 00:22:15.247 idle_polls: 7678 00:22:15.247 sock_completions: 4870 00:22:15.247 nvme_completions: 6405 00:22:15.247 submitted_requests: 9582 00:22:15.247 queued_requests: 1 00:22:15.247 00:22:15.247 ==================== 00:22:15.247 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:15.247 TCP transport: 00:22:15.247 polls: 12358 00:22:15.247 idle_polls: 7849 00:22:15.247 sock_completions: 4509 00:22:15.247 nvme_completions: 5931 00:22:15.247 submitted_requests: 8786 00:22:15.247 queued_requests: 1 00:22:15.247 ======================================================== 00:22:15.247 Latency(us) 00:22:15.247 Device Information : IOPS MiB/s Average min max 00:22:15.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1600.05 400.01 81934.34 43291.19 143417.31 00:22:15.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1481.62 370.41 86626.08 46156.76 143285.67 00:22:15.247 ======================================================== 00:22:15.247 Total : 3081.68 770.42 84190.06 43291.19 143417.31 00:22:15.247 00:22:15.247 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:15.247 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:15.247 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:15.247 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:15.248 rmmod nvme_tcp 00:22:15.248 rmmod nvme_fabrics 00:22:15.248 rmmod nvme_keyring 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2768035 ']' 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2768035 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 2768035 ']' 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 2768035 00:22:15.248 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:22:15.507 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:15.507 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2768035 00:22:15.507 10:50:42 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:15.507 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:15.507 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2768035' 00:22:15.507 killing process with pid 2768035 00:22:15.507 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 2768035 00:22:15.507 10:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 2768035 00:22:16.880 10:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:16.880 10:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:16.880 10:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:16.880 10:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:16.880 10:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:16.880 10:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:16.880 10:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:16.880 10:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:16.880 10:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:16.880 10:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.880 10:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.880 10:50:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:19.413 00:22:19.413 real 0m24.415s 00:22:19.413 user 1m4.127s 00:22:19.413 sys 0m8.193s 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:19.413 ************************************ 00:22:19.413 END TEST nvmf_perf 00:22:19.413 ************************************ 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.413 ************************************ 00:22:19.413 START TEST nvmf_fio_host 00:22:19.413 ************************************ 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:19.413 * Looking for test storage... 
00:22:19.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:19.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.413 --rc genhtml_branch_coverage=1 00:22:19.413 --rc genhtml_function_coverage=1 00:22:19.413 --rc genhtml_legend=1 00:22:19.413 --rc geninfo_all_blocks=1 00:22:19.413 --rc geninfo_unexecuted_blocks=1 00:22:19.413 00:22:19.413 ' 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:19.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.413 --rc genhtml_branch_coverage=1 00:22:19.413 --rc genhtml_function_coverage=1 00:22:19.413 --rc genhtml_legend=1 00:22:19.413 --rc geninfo_all_blocks=1 00:22:19.413 --rc geninfo_unexecuted_blocks=1 00:22:19.413 00:22:19.413 ' 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:19.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.413 --rc genhtml_branch_coverage=1 00:22:19.413 --rc genhtml_function_coverage=1 00:22:19.413 --rc genhtml_legend=1 00:22:19.413 --rc geninfo_all_blocks=1 00:22:19.413 --rc geninfo_unexecuted_blocks=1 00:22:19.413 00:22:19.413 ' 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:19.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.413 --rc genhtml_branch_coverage=1 00:22:19.413 --rc genhtml_function_coverage=1 00:22:19.413 --rc genhtml_legend=1 00:22:19.413 --rc geninfo_all_blocks=1 00:22:19.413 --rc geninfo_unexecuted_blocks=1 00:22:19.413 00:22:19.413 ' 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:19.413 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.414 10:50:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:19.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:19.414 
10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:19.414 10:50:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:24.682 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:24.682 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:24.682 Found net devices under 0000:86:00.0: cvl_0_0 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:24.682 Found net devices under 0000:86:00.1: cvl_0_1 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.682 10:50:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.682 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.682 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:24.682 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:24.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:24.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:22:24.682 00:22:24.682 --- 10.0.0.2 ping statistics --- 00:22:24.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.682 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:22:24.682 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:24.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:22:24.682 00:22:24.682 --- 10.0.0.1 ping statistics --- 00:22:24.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.682 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:24.682 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.682 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:24.682 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:24.682 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.682 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:24.682 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2774067 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2774067 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 2774067 ']' 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.683 [2024-11-07 10:50:52.122002] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:22:24.683 [2024-11-07 10:50:52.122048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.683 [2024-11-07 10:50:52.188873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.683 [2024-11-07 10:50:52.231397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.683 [2024-11-07 10:50:52.231441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.683 [2024-11-07 10:50:52.231448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.683 [2024-11-07 10:50:52.231455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.683 [2024-11-07 10:50:52.231460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.683 [2024-11-07 10:50:52.232893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.683 [2024-11-07 10:50:52.232994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.683 [2024-11-07 10:50:52.233080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.683 [2024-11-07 10:50:52.233082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:22:24.683 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:24.940 [2024-11-07 10:50:52.493668] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.940 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:24.940 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:24.940 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.940 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:25.198 Malloc1 00:22:25.198 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:25.456 10:50:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:25.713 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.713 [2024-11-07 10:50:53.355443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:25.971 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:26.228 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:26.228 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:26.228 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:26.228 10:50:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:26.486 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:26.486 fio-3.35 00:22:26.486 Starting 1 thread 00:22:29.015 00:22:29.015 test: (groupid=0, jobs=1): 
err= 0: pid=2774522: Thu Nov 7 10:50:56 2024 00:22:29.015 read: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(90.9MiB/2005msec) 00:22:29.015 slat (nsec): min=1584, max=243048, avg=1740.11, stdev=2247.03 00:22:29.015 clat (usec): min=3086, max=10800, avg=6111.70, stdev=457.41 00:22:29.015 lat (usec): min=3119, max=10802, avg=6113.44, stdev=457.33 00:22:29.015 clat percentiles (usec): 00:22:29.015 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5735], 00:22:29.015 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6194], 00:22:29.015 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:22:29.015 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 8291], 99.95th=[10290], 00:22:29.015 | 99.99th=[10552] 00:22:29.015 bw ( KiB/s): min=45600, max=46976, per=99.91%, avg=46370.00, stdev=573.04, samples=4 00:22:29.015 iops : min=11400, max=11744, avg=11592.50, stdev=143.26, samples=4 00:22:29.015 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(90.2MiB/2005msec); 0 zone resets 00:22:29.015 slat (nsec): min=1627, max=226877, avg=1802.76, stdev=1666.25 00:22:29.015 clat (usec): min=2440, max=9611, avg=4915.67, stdev=381.75 00:22:29.015 lat (usec): min=2455, max=9612, avg=4917.47, stdev=381.79 00:22:29.015 clat percentiles (usec): 00:22:29.015 | 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4621], 00:22:29.015 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4883], 60.00th=[ 5014], 00:22:29.015 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 00:22:29.015 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 7242], 99.95th=[ 8291], 00:22:29.015 | 99.99th=[ 9503] 00:22:29.015 bw ( KiB/s): min=45824, max=46464, per=100.00%, avg=46096.00, stdev=278.36, samples=4 00:22:29.015 iops : min=11456, max=11616, avg=11524.00, stdev=69.59, samples=4 00:22:29.015 lat (msec) : 4=0.43%, 10=99.54%, 20=0.03% 00:22:29.015 cpu : usr=72.90%, sys=25.80%, ctx=83, majf=0, minf=2 00:22:29.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:29.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:29.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:29.015 issued rwts: total=23264,23096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:29.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:29.015 00:22:29.015 Run status group 0 (all jobs): 00:22:29.015 READ: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=90.9MiB (95.3MB), run=2005-2005msec 00:22:29.015 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=90.2MiB (94.6MB), run=2005-2005msec 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # 
local sanitizers 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:29.015 10:50:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:29.015 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:29.015 fio-3.35 00:22:29.015 Starting 1 thread 00:22:31.947 00:22:31.947 test: (groupid=0, jobs=1): err= 0: pid=2775086: Thu Nov 7 10:50:58 2024 00:22:31.947 read: IOPS=10.8k, BW=168MiB/s (176MB/s)(337MiB/2005msec) 00:22:31.947 slat (nsec): min=2555, max=97418, avg=2836.12, stdev=1439.20 00:22:31.947 clat (usec): min=1540, max=12953, avg=6820.80, stdev=1545.47 00:22:31.947 lat (usec): min=1543, max=12967, avg=6823.63, stdev=1545.60 00:22:31.947 clat percentiles (usec): 00:22:31.947 | 1.00th=[ 3687], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5473], 00:22:31.947 | 30.00th=[ 5932], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 7308], 00:22:31.947 | 70.00th=[ 7701], 80.00th=[ 8094], 90.00th=[ 8717], 95.00th=[ 9241], 00:22:31.947 | 99.00th=[10683], 99.50th=[11076], 99.90th=[11863], 99.95th=[12387], 00:22:31.947 | 99.99th=[12911] 00:22:31.947 bw ( KiB/s): min=80256, max=92608, per=50.82%, avg=87408.00, stdev=5644.99, samples=4 00:22:31.947 iops : min= 5016, max= 5788, avg=5463.00, stdev=352.81, samples=4 00:22:31.947 write: IOPS=6343, BW=99.1MiB/s (104MB/s)(179MiB/1805msec); 0 zone resets 00:22:31.947 
slat (usec): min=29, max=322, avg=31.70, stdev= 6.45 00:22:31.947 clat (usec): min=3764, max=14184, avg=8825.97, stdev=1578.64 00:22:31.947 lat (usec): min=3794, max=14296, avg=8857.66, stdev=1579.71 00:22:31.947 clat percentiles (usec): 00:22:31.947 | 1.00th=[ 5735], 5.00th=[ 6521], 10.00th=[ 6980], 20.00th=[ 7504], 00:22:31.947 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8979], 00:22:31.947 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11076], 95.00th=[11731], 00:22:31.947 | 99.00th=[12780], 99.50th=[13304], 99.90th=[13960], 99.95th=[14091], 00:22:31.947 | 99.99th=[14091] 00:22:31.947 bw ( KiB/s): min=85120, max=95200, per=89.65%, avg=90992.00, stdev=4641.69, samples=4 00:22:31.947 iops : min= 5320, max= 5950, avg=5687.00, stdev=290.11, samples=4 00:22:31.947 lat (msec) : 2=0.03%, 4=1.59%, 10=89.24%, 20=9.14% 00:22:31.947 cpu : usr=86.28%, sys=12.82%, ctx=52, majf=0, minf=2 00:22:31.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:31.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:31.947 issued rwts: total=21555,11450,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:31.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:31.947 00:22:31.947 Run status group 0 (all jobs): 00:22:31.947 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=337MiB (353MB), run=2005-2005msec 00:22:31.947 WRITE: bw=99.1MiB/s (104MB/s), 99.1MiB/s-99.1MiB/s (104MB/s-104MB/s), io=179MiB (188MB), run=1805-1805msec 00:22:31.947 10:50:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.947 rmmod nvme_tcp 00:22:31.947 rmmod nvme_fabrics 00:22:31.947 rmmod nvme_keyring 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2774067 ']' 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2774067 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 2774067 ']' 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@956 -- # kill -0 2774067 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2774067 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2774067' 00:22:31.947 killing process with pid 2774067 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 2774067 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 2774067 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.947 10:50:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.482 00:22:34.482 real 0m15.002s 00:22:34.482 user 0m45.696s 00:22:34.482 sys 0m6.079s 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.482 ************************************ 00:22:34.482 END TEST nvmf_fio_host 00:22:34.482 ************************************ 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.482 ************************************ 00:22:34.482 START TEST nvmf_failover 00:22:34.482 ************************************ 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:34.482 * Looking for test storage... 00:22:34.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:34.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.482 --rc genhtml_branch_coverage=1 00:22:34.482 --rc genhtml_function_coverage=1 00:22:34.482 --rc genhtml_legend=1 00:22:34.482 --rc geninfo_all_blocks=1 00:22:34.482 --rc geninfo_unexecuted_blocks=1 00:22:34.482 00:22:34.482 ' 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:34.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.482 --rc genhtml_branch_coverage=1 00:22:34.482 --rc genhtml_function_coverage=1 00:22:34.482 --rc genhtml_legend=1 00:22:34.482 --rc geninfo_all_blocks=1 00:22:34.482 --rc geninfo_unexecuted_blocks=1 00:22:34.482 00:22:34.482 ' 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:34.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.482 --rc genhtml_branch_coverage=1 00:22:34.482 --rc genhtml_function_coverage=1 00:22:34.482 --rc genhtml_legend=1 00:22:34.482 --rc geninfo_all_blocks=1 00:22:34.482 --rc geninfo_unexecuted_blocks=1 00:22:34.482 00:22:34.482 ' 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:34.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.482 --rc genhtml_branch_coverage=1 00:22:34.482 --rc genhtml_function_coverage=1 00:22:34.482 --rc genhtml_legend=1 00:22:34.482 --rc geninfo_all_blocks=1 00:22:34.482 --rc geninfo_unexecuted_blocks=1 00:22:34.482 00:22:34.482 ' 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.482 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
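The two assignments traced at this point (rpc_py above, bdevperf_rpc_sock on the next line) are the control endpoints used for the rest of the failover run: rpc_py drives the NVMe-oF target over its default RPC socket (/var/tmp/spdk.sock, as seen in waitforlisten below), while /var/tmp/bdevperf.sock is the separate RPC server that bdevperf exposes so host-side controllers can be reconfigured while I/O is in flight. A minimal sketch of how the same rpc.py script addresses each side; the two read-only calls are illustrative and do not appear in this trace:

  # target-side RPC (default socket /var/tmp/spdk.sock)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
  # bdevperf-side RPC, selected with -s as seen later in this log
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers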
00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.483 10:51:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.917 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:39.918 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:39.918 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:39.918 Found net devices under 0000:86:00.0: cvl_0_0 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:39.918 Found net devices under 0000:86:00.1: cvl_0_1 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:22:39.918 00:22:39.918 --- 10.0.0.2 ping statistics --- 00:22:39.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.918 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:22:39.918 10:51:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:22:39.918 00:22:39.918 --- 10.0.0.1 ping statistics --- 00:22:39.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.918 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2778970 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2778970 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2778970 ']' 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:39.918 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:39.918 [2024-11-07 10:51:07.091895] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:22:39.919 [2024-11-07 10:51:07.091943] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.919 [2024-11-07 10:51:07.159568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:39.919 [2024-11-07 10:51:07.201530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:39.919 [2024-11-07 10:51:07.201567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.919 [2024-11-07 10:51:07.201574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.919 [2024-11-07 10:51:07.201580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.919 [2024-11-07 10:51:07.201586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.919 [2024-11-07 10:51:07.203040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.919 [2024-11-07 10:51:07.203108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.919 [2024-11-07 10:51:07.203109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.919 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:39.919 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:22:39.919 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.919 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:39.919 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:39.919 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.919 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:39.919 [2024-11-07 10:51:07.508110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.919 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:40.177 Malloc0 00:22:40.177 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.434 10:51:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:40.691 10:51:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.691 [2024-11-07 10:51:08.358327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.948 10:51:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:40.948 [2024-11-07 10:51:08.566913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:40.948 10:51:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:41.206 [2024-11-07 10:51:08.775611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:22:41.206 10:51:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2779413 00:22:41.206 10:51:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:41.206 10:51:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.206 10:51:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2779413 /var/tmp/bdevperf.sock 00:22:41.206 10:51:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2779413 ']' 00:22:41.206 10:51:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.206 10:51:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:41.206 10:51:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.206 10:51:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:41.206 10:51:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:41.463 10:51:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:41.463 10:51:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:22:41.463 10:51:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:41.721 NVMe0n1 00:22:41.721 10:51:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:41.978 00:22:41.978 10:51:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2779758 00:22:41.978 10:51:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:41.978 10:51:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:43.350 10:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.350 [2024-11-07 10:51:10.791643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.350 [2024-11-07 10:51:10.791691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.350 [2024-11-07 10:51:10.791699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.350 [2024-11-07 
10:51:10.791706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.350 [2024-11-07 10:51:10.791713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.350 [2024-11-07 10:51:10.791719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.350 [2024-11-07 10:51:10.791726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.350 [2024-11-07 10:51:10.791732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.350 [2024-11-07 10:51:10.791738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.350 [2024-11-07 10:51:10.791744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.351 [2024-11-07 10:51:10.791751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.351 [2024-11-07 10:51:10.791757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.351 [2024-11-07 10:51:10.791763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.351 [2024-11-07 10:51:10.791769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.351 [2024-11-07 10:51:10.791775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.351 [2024-11-07 10:51:10.791781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0b2d0 is same with the state(6) to be set 00:22:43.351 10:51:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:46.631 10:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:46.631 00:22:46.631 10:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:46.888 10:51:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:50.167 10:51:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.167 [2024-11-07 10:51:17.525497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.167 10:51:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:51.100 10:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:51.100 [2024-11-07 10:51:18.744218] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.100 [2024-11-07 10:51:18.744258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.100 [2024-11-07 10:51:18.744266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.100 [2024-11-07 10:51:18.744283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.100 [2024-11-07 10:51:18.744290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.100 [2024-11-07 10:51:18.744296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 
00:22:51.101 [2024-11-07 10:51:18.744398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is 
same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.101 [2024-11-07 10:51:18.744652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ce30 is same with the state(6) to be set 00:22:51.359 10:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2779758 00:22:57.927 { 00:22:57.927 "results": [ 00:22:57.927 { 00:22:57.927 "job": "NVMe0n1", 00:22:57.927 "core_mask": "0x1", 00:22:57.927 "workload": "verify", 00:22:57.927 "status": "finished", 00:22:57.927 "verify_range": { 00:22:57.927 "start": 0, 00:22:57.927 "length": 16384 00:22:57.927 }, 00:22:57.927 "queue_depth": 128, 
00:22:57.927 "io_size": 4096, 00:22:57.927 "runtime": 15.004351, 00:22:57.927 "iops": 10781.206064827462, 00:22:57.927 "mibps": 42.11408619073227, 00:22:57.927 "io_failed": 10605, 00:22:57.927 "io_timeout": 0, 00:22:57.927 "avg_latency_us": 11119.934770692973, 00:22:57.927 "min_latency_us": 430.9704347826087, 00:22:57.927 "max_latency_us": 28607.888695652175 00:22:57.927 } 00:22:57.927 ], 00:22:57.927 "core_count": 1 00:22:57.927 } 00:22:57.927 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2779413 00:22:57.927 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2779413 ']' 00:22:57.927 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2779413 00:22:57.927 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:22:57.927 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:57.927 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2779413 00:22:57.927 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:57.927 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:57.927 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2779413' 00:22:57.927 killing process with pid 2779413 00:22:57.927 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2779413 00:22:57.927 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2779413 00:22:57.927 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:57.927 [2024-11-07 10:51:08.852688] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:22:57.927 [2024-11-07 10:51:08.852747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779413 ] 00:22:57.927 [2024-11-07 10:51:08.918321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.927 [2024-11-07 10:51:08.960679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.927 Running I/O for 15 seconds... 
00:22:57.927 10888.00 IOPS, 42.53 MiB/s [2024-11-07T09:51:25.598Z] [2024-11-07 10:51:10.792086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.927 [2024-11-07 10:51:10.792121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.927 [2024-11-07 10:51:10.792137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.927 [2024-11-07 10:51:10.792146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.927 [2024-11-07 10:51:10.792156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.927 [2024-11-07 10:51:10.792163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.927 [2024-11-07 10:51:10.792172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.927 [2024-11-07 10:51:10.792179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.927 [2024-11-07 10:51:10.792187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.927 [2024-11-07 10:51:10.792194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.927 [2024-11-07 10:51:10.792202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.927 [2024-11-07 10:51:10.792209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.927 [2024-11-07 10:51:10.792217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.927 [2024-11-07 10:51:10.792223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.927 [2024-11-07 10:51:10.792232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.927 [2024-11-07 10:51:10.792238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.927 [2024-11-07 10:51:10.792246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.927 [2024-11-07 10:51:10.792253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:57.928 [2024-11-07 10:51:10.792276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792439] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792593] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.928 [2024-11-07 10:51:10.792615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 
lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.928 [2024-11-07 10:51:10.792836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.928 [2024-11-07 10:51:10.792842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.792851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.929 [2024-11-07 10:51:10.792857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.792866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.792872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.792886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.792893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.792901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.792908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.792916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.792923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.792931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.792938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.792946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.792954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.792963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.792969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.792978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.792985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.792993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 
10:51:10.793060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.929 [2024-11-07 10:51:10.793442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.929 [2024-11-07 10:51:10.793449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:57.930 [2024-11-07 10:51:10.793686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793833] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.793987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.793995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.794002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.794010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.794017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.930 [2024-11-07 10:51:10.794025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.930 [2024-11-07 10:51:10.794033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:10.794040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:10.794047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:10.794055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:10.794063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:10.794082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.931 [2024-11-07 10:51:10.794088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.931 [2024-11-07 10:51:10.794094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96104 len:8 PRP1 0x0 PRP2 0x0 00:22:57.931 [2024-11-07 10:51:10.794101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:10.794151] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:57.931 [2024-11-07 10:51:10.794173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.931 [2024-11-07 10:51:10.794181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:10.794189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.931 [2024-11-07 10:51:10.794195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:10.794202] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.931 [2024-11-07 10:51:10.794209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:10.794215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.931 [2024-11-07 10:51:10.794222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:10.794229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:57.931 [2024-11-07 10:51:10.797086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:57.931 [2024-11-07 10:51:10.797115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeac350 (9): Bad file descriptor 00:22:57.931 [2024-11-07 10:51:10.954054] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:22:57.931 10007.50 IOPS, 39.09 MiB/s [2024-11-07T09:51:25.602Z] 10364.00 IOPS, 40.48 MiB/s [2024-11-07T09:51:25.602Z] 10510.50 IOPS, 41.06 MiB/s [2024-11-07T09:51:25.602Z] [2024-11-07 10:51:14.307658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 
10:51:14.307802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.307985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.307991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.308004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.308011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.308019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.308026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.308035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.308042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.308050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.308057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.308065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.308072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.308080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.308086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.308095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.308101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.308109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.308116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.931 [2024-11-07 10:51:14.308126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.931 [2024-11-07 10:51:14.308132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54816 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:57.932 [2024-11-07 10:51:14.308415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.932 [2024-11-07 10:51:14.308564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.932 [2024-11-07 10:51:14.308570] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.933 [2024-11-07 10:51:14.308585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.933 [2024-11-07 10:51:14.308600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.933 [2024-11-07 10:51:14.308614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:55040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:55056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308721] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.933 [2024-11-07 10:51:14.308736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.933 [2024-11-07 10:51:14.308750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:55096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.308990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.308998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.309005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.309013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.309020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 
[2024-11-07 10:51:14.309028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.309034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.309042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.309049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.309057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.309064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.309073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.309080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.309088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.309095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.309103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.309110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.309118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.309125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.309133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.309140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.309148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.309154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.933 [2024-11-07 10:51:14.309162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.933 [2024-11-07 10:51:14.309169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:67 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55464 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.934 [2024-11-07 10:51:14.309623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.934 [2024-11-07 10:51:14.309659] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.934 [2024-11-07 10:51:14.309665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55544 len:8 PRP1 0x0 PRP2 0x0 00:22:57.934 [2024-11-07 10:51:14.309676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309721] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:57.934 [2024-11-07 10:51:14.309743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.934 [2024-11-07 10:51:14.309750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.934 [2024-11-07 10:51:14.309764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.934 [2024-11-07 10:51:14.309778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.934 [2024-11-07 10:51:14.309793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.934 [2024-11-07 10:51:14.309799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:57.934 [2024-11-07 10:51:14.309822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeac350 (9): Bad file descriptor 00:22:57.934 [2024-11-07 10:51:14.312656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:57.934 [2024-11-07 10:51:14.344047] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:22:57.934 10497.40 IOPS, 41.01 MiB/s [2024-11-07T09:51:25.605Z] 10545.17 IOPS, 41.19 MiB/s [2024-11-07T09:51:25.605Z] 10586.43 IOPS, 41.35 MiB/s [2024-11-07T09:51:25.605Z] 10635.25 IOPS, 41.54 MiB/s [2024-11-07T09:51:25.605Z] 10670.89 IOPS, 41.68 MiB/s [2024-11-07T09:51:25.605Z] [2024-11-07 10:51:18.745973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.934 [2024-11-07 10:51:18.746008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.935 [2024-11-07 10:51:18.746030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.935 [2024-11-07 10:51:18.746047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.935 [2024-11-07 10:51:18.746062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.935 [2024-11-07 10:51:18.746077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.935 [2024-11-07 10:51:18.746098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 
nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:57736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:57752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:57768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:57776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57792 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:57800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:57808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:57816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:57824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:57840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:57848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:57856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:57864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:57872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746464] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:57880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:57896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:57920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.935 [2024-11-07 10:51:18.746586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.935 [2024-11-07 10:51:18.746597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:57952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746623] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:57976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:58024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 
[2024-11-07 10:51:18.746929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.746988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.746995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.747003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.747009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.747018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.747024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.747032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.747038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.747046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.747054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.747062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.747068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.936 [2024-11-07 10:51:18.747076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.936 [2024-11-07 10:51:18.747083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:87 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.937 [2024-11-07 10:51:18.747295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.937 [2024-11-07 10:51:18.747310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:57.937 [2024-11-07 10:51:18.747324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.937 [2024-11-07 10:51:18.747352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58320 len:8 PRP1 0x0 PRP2 0x0 00:22:57.937 [2024-11-07 10:51:18.747358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.937 [2024-11-07 10:51:18.747528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.937 [2024-11-07 10:51:18.747534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58328 len:8 PRP1 0x0 PRP2 0x0 00:22:57.937 [2024-11-07 10:51:18.747540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.937 [2024-11-07 10:51:18.747553] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.937 [2024-11-07 10:51:18.747560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58336 len:8 PRP1 0x0 PRP2 0x0 00:22:57.937 [2024-11-07 10:51:18.747566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.937 [2024-11-07 10:51:18.747578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.937 [2024-11-07 10:51:18.747583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58344 len:8 PRP1 0x0 PRP2 0x0 00:22:57.937 [2024-11-07 10:51:18.747590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.937 [2024-11-07 10:51:18.747603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.937 [2024-11-07 10:51:18.747608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58352 len:8 PRP1 0x0 PRP2 0x0 00:22:57.937 [2024-11-07 10:51:18.747615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.937 [2024-11-07 10:51:18.747627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.937 [2024-11-07 10:51:18.747633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58360 len:8 PRP1 0x0 PRP2 0x0 00:22:57.937 [2024-11-07 10:51:18.747639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.937 [2024-11-07 10:51:18.747651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.937 [2024-11-07 10:51:18.747656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58368 len:8 PRP1 0x0 PRP2 0x0 00:22:57.937 [2024-11-07 10:51:18.747663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.937 [2024-11-07 10:51:18.747674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.937 [2024-11-07 10:51:18.747679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58376 len:8 PRP1 0x0 PRP2 0x0 00:22:57.937 [2024-11-07 10:51:18.747686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.937 [2024-11-07 10:51:18.747698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:57.937 [2024-11-07 10:51:18.747703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58384 len:8 PRP1 0x0 PRP2 0x0 00:22:57.937 [2024-11-07 10:51:18.747710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.937 [2024-11-07 10:51:18.747716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.937 [2024-11-07 10:51:18.747721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.937 [2024-11-07 10:51:18.747726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58392 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.747733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.747741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.747745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.747752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58400 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.747758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.747765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.747770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.747775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58408 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.747781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.747789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.747794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.747800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58416 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.747806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.747813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.747818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.747824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58424 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.747830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.747837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.747842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 
[2024-11-07 10:51:18.747847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58432 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.747854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.747860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.747865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.747870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58440 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.747876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.747883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.747888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.747893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58448 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.747900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.747907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.747911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.747917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58456 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.747923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.747930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.747935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.747942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58464 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.747948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.747955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.747960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.747966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58472 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.747975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.747982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.747988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.747995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58480 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.748002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.748011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.748018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.748025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58488 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.748032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.748040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.748046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.748053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58496 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.748062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.748069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.748075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.748082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58504 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.748090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.748097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.748103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.748109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58512 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.748116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.748123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.748128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.748135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58520 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.748142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.748149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.748154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.758227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:58528 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.758245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.758256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.758264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.758275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58536 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.758285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.758294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.758300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.758307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58544 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.758316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.758326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.758333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.758340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58552 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.758349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.758358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.758365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.758372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58560 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.758380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.758389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.758397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.758404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58568 len:8 PRP1 0x0 PRP2 0x0 00:22:57.938 [2024-11-07 10:51:18.758412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.758421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.938 [2024-11-07 10:51:18.758428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.938 [2024-11-07 10:51:18.758449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58576 len:8 PRP1 0x0 PRP2 0x0 
00:22:57.938 [2024-11-07 10:51:18.758458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.938 [2024-11-07 10:51:18.758468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58584 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58592 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58600 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58608 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58616 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58624 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58632 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58640 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57624 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57632 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57640 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57648 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57656 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57664 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57688 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.758967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57696 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.758977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.758986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.758993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.759000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57704 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.759009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.759018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.759027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.759034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57712 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.759045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:57.939 [2024-11-07 10:51:18.759055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.759062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.759069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57720 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.759080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.759089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.939 [2024-11-07 10:51:18.759096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.939 [2024-11-07 10:51:18.759103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57728 len:8 PRP1 0x0 PRP2 0x0 00:22:57.939 [2024-11-07 10:51:18.759112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.939 [2024-11-07 10:51:18.759121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57736 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57744 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57752 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57760 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759248] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57768 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57776 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57784 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57792 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57800 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57808 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57816 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57824 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57832 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57840 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57848 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57856 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 
10:51:18.759647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57864 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57872 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57880 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57888 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57896 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.940 [2024-11-07 10:51:18.759801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.940 [2024-11-07 10:51:18.759808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.940 [2024-11-07 10:51:18.759815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57904 len:8 PRP1 0x0 PRP2 0x0 00:22:57.940 [2024-11-07 10:51:18.759824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.759833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.759839] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.759846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57912 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.759856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.759865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.759872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.759879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57920 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.759887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.759897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.759904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.759911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57928 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.759920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.759929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.759935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.759942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57936 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.759951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.759960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.759967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.759974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57944 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.759982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.759991] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.759998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57952 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57960 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57968 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57976 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57984 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57992 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58000 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 
10:51:18.760229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58008 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58016 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58024 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58032 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58040 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58048 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58056 len:8 PRP1 0x0 PRP2 0x0 00:22:57.941 [2024-11-07 10:51:18.760429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.941 [2024-11-07 10:51:18.760443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.941 [2024-11-07 10:51:18.760450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.941 [2024-11-07 10:51:18.760457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58064 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.760466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.760475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.766841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.766874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58072 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.766886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.766900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.766908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.766916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58080 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.766926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.766941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.766949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.766956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58088 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.766966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.766975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.766982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.766991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58096 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:58104 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58112 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58120 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58128 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58136 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58144 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58152 len:8 PRP1 0x0 PRP2 0x0 
00:22:57.942 [2024-11-07 10:51:18.767238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58160 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58168 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58176 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58184 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58192 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58200 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58208 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58216 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767528] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58224 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58232 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.942 [2024-11-07 10:51:18.767601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.942 [2024-11-07 10:51:18.767609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58240 len:8 PRP1 0x0 PRP2 0x0 00:22:57.942 [2024-11-07 10:51:18.767618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.942 [2024-11-07 10:51:18.767628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.943 [2024-11-07 10:51:18.767635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.943 [2024-11-07 10:51:18.767642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58248 len:8 PRP1 0x0 PRP2 0x0 00:22:57.943 [2024-11-07 10:51:18.767651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.943 [2024-11-07 10:51:18.767661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.943 [2024-11-07 10:51:18.767668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.943 [2024-11-07 10:51:18.767675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58256 len:8 PRP1 0x0 PRP2 0x0 00:22:57.943 [2024-11-07 10:51:18.767685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.943 [2024-11-07 10:51:18.767694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.943 [2024-11-07 10:51:18.767701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.943 [2024-11-07 10:51:18.767709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58264 len:8 PRP1 0x0 PRP2 0x0 00:22:57.943 [2024-11-07 10:51:18.767718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.943 [2024-11-07 10:51:18.767729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.943 [2024-11-07 10:51:18.767737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.943 [2024-11-07 10:51:18.767744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58272 len:8 PRP1 0x0 PRP2 0x0 00:22:57.943 [2024-11-07 10:51:18.767753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.943 [2024-11-07 10:51:18.767762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.943 [2024-11-07 10:51:18.767769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.943 [2024-11-07 10:51:18.767777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58280 len:8 PRP1 0x0 PRP2 0x0 00:22:57.943 [2024-11-07 10:51:18.767786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.943 [2024-11-07 10:51:18.767795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.943 [2024-11-07 10:51:18.767803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.943 [2024-11-07 10:51:18.767810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58288 len:8 PRP1 0x0 PRP2 0x0 00:22:57.943 [2024-11-07 10:51:18.767819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.943 [2024-11-07 10:51:18.767829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.943 [2024-11-07 10:51:18.767836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.943 [2024-11-07 10:51:18.767844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58296 len:8 PRP1 0x0 PRP2 0x0 00:22:57.943 [2024-11-07 10:51:18.767853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:57.943 [2024-11-07 10:51:18.767862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.943 [2024-11-07 10:51:18.767870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.943 [2024-11-07 10:51:18.767877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58304 len:8 PRP1 0x0 PRP2 0x0 00:22:57.943 [2024-11-07 10:51:18.767886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.943 [2024-11-07 10:51:18.767896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.943 [2024-11-07 10:51:18.767903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.943 [2024-11-07 10:51:18.767911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57672 len:8 PRP1 0x0 PRP2 0x0 00:22:57.943 [2024-11-07 10:51:18.767920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.943 [2024-11-07 10:51:18.767929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.943 [2024-11-07 10:51:18.767936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.943 [2024-11-07 10:51:18.767944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57680 len:8 PRP1 0x0 PRP2 0x0 00:22:57.943 [2024-11-07 10:51:18.767953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.943 [2024-11-07 10:51:18.767963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.943 [2024-11-07 10:51:18.767970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.943 [2024-11-07 10:51:18.767977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58312 len:8 PRP1 0x0 PRP2 0x0 00:22:57.943 [2024-11-07 10:51:18.767989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.943 [2024-11-07 10:51:18.767998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:57.943 [2024-11-07 10:51:18.768006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:57.943 [2024-11-07 10:51:18.768014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58320 len:8 PRP1 0x0 PRP2 0x0 00:22:57.943 [2024-11-07 10:51:18.768022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.943 [2024-11-07 10:51:18.768077] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:57.943 [2024-11-07 10:51:18.768113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:57.943 [2024-11-07 10:51:18.768125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:57.943 [2024-11-07 10:51:18.768137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:57.943 [2024-11-07 10:51:18.768146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:57.943 [2024-11-07 10:51:18.768156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:57.943 [2024-11-07 10:51:18.768165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:57.943 [2024-11-07 10:51:18.768175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:57.943 [2024-11-07 10:51:18.768185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:57.943 [2024-11-07 10:51:18.768194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:57.943 [2024-11-07 10:51:18.768237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeac350 (9): Bad file descriptor
00:22:57.943 [2024-11-07 10:51:18.772261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:57.943 [2024-11-07 10:51:18.800922] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:22:57.943 10646.50 IOPS, 41.59 MiB/s [2024-11-07T09:51:25.614Z] 10685.82 IOPS, 41.74 MiB/s [2024-11-07T09:51:25.614Z] 10709.75 IOPS, 41.83 MiB/s [2024-11-07T09:51:25.614Z] 10737.38 IOPS, 41.94 MiB/s [2024-11-07T09:51:25.614Z] 10763.50 IOPS, 42.04 MiB/s
00:22:57.943 Latency(us)
00:22:57.943 [2024-11-07T09:51:25.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:57.943 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:57.943 Verification LBA range: start 0x0 length 0x4000
00:22:57.943 NVMe0n1 : 15.00 10781.21 42.11 706.79 0.00 11119.93 430.97 28607.89
00:22:57.943 [2024-11-07T09:51:25.614Z] ===================================================================================================================
00:22:57.943 [2024-11-07T09:51:25.614Z] Total : 10781.21 42.11 706.79 0.00 11119.93 430.97 28607.89
00:22:57.943 Received shutdown signal, test time was about 15.000000 seconds
00:22:57.943
00:22:57.943 Latency(us)
00:22:57.943 [2024-11-07T09:51:25.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:57.943 [2024-11-07T09:51:25.614Z] ===================================================================================================================
00:22:57.943 [2024-11-07T09:51:25.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:57.943 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:57.943 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:57.943 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:57.943 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2782368
00:22:57.943 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:57.943 10:51:24
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2782368 /var/tmp/bdevperf.sock 00:22:57.943 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2782368 ']' 00:22:57.943 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.943 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:57.943 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.943 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:57.943 10:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:57.943 10:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:57.943 10:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:22:57.943 10:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:57.943 [2024-11-07 10:51:25.397862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:57.943 10:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:58.201 [2024-11-07 10:51:25.598426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:58.201 10:51:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:58.459 NVMe0n1 00:22:58.459 10:51:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:59.023 00:22:59.024 10:51:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:59.024 00:22:59.281 10:51:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:59.281 10:51:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:59.281 10:51:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.538 10:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:02.817 10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:02.817 10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:02.817 10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:02.817 10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2783189 00:23:02.817 10:51:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2783189 00:23:03.749 { 00:23:03.749 "results": [ 00:23:03.749 { 00:23:03.749 "job": "NVMe0n1", 00:23:03.749 "core_mask": "0x1", 00:23:03.749 "workload": "verify", 00:23:03.749 "status": "finished", 00:23:03.749 "verify_range": { 00:23:03.749 "start": 0, 00:23:03.749 "length": 16384 00:23:03.749 }, 00:23:03.749 "queue_depth": 128, 00:23:03.749 "io_size": 4096, 00:23:03.749 "runtime": 1.005928, 00:23:03.749 "iops": 11105.168560771745, 00:23:03.749 "mibps": 43.37956469051463, 00:23:03.749 "io_failed": 0, 00:23:03.749 "io_timeout": 0, 00:23:03.749 "avg_latency_us": 11465.793038963464, 00:23:03.749 "min_latency_us": 1282.2260869565218, 00:23:03.749 "max_latency_us": 10257.808695652175 00:23:03.749 } 00:23:03.749 ], 00:23:03.749 "core_count": 1 00:23:03.749 } 00:23:04.007 10:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:04.007 [2024-11-07 10:51:25.029076] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:23:04.007 [2024-11-07 10:51:25.029128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2782368 ] 00:23:04.007 [2024-11-07 10:51:25.093073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.007 [2024-11-07 10:51:25.131128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.007 [2024-11-07 10:51:27.066854] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:04.007 [2024-11-07 10:51:27.066901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.007 [2024-11-07 10:51:27.066912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.007 [2024-11-07 10:51:27.066921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.007 [2024-11-07 10:51:27.066928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.007 [2024-11-07 10:51:27.066936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.007 [2024-11-07 10:51:27.066943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.007 [2024-11-07 10:51:27.066951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:04.007 [2024-11-07 10:51:27.066957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.007 [2024-11-07 10:51:27.066964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:04.007 [2024-11-07 10:51:27.066988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:04.007 [2024-11-07 10:51:27.067002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6f350 (9): Bad file descriptor 00:23:04.007 [2024-11-07 10:51:27.116414] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:04.007 Running I/O for 1 seconds... 00:23:04.007 11043.00 IOPS, 43.14 MiB/s 00:23:04.007 Latency(us) 00:23:04.008 [2024-11-07T09:51:31.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.008 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:04.008 Verification LBA range: start 0x0 length 0x4000 00:23:04.008 NVMe0n1 : 1.01 11105.17 43.38 0.00 0.00 11465.79 1282.23 10257.81 00:23:04.008 [2024-11-07T09:51:31.679Z] =================================================================================================================== 00:23:04.008 [2024-11-07T09:51:31.679Z] Total : 11105.17 43.38 0.00 0.00 11465.79 1282.23 10257.81 00:23:04.008 10:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.008 10:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:04.008 10:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:04.265 10:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.265 10:51:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:04.522 10:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:04.779 10:51:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2782368 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2782368 ']' 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2782368 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 
-- # '[' Linux = Linux ']' 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2782368 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2782368' 00:23:08.057 killing process with pid 2782368 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2782368 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2782368 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:08.057 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:08.315 rmmod nvme_tcp 00:23:08.315 rmmod nvme_fabrics 00:23:08.315 rmmod nvme_keyring 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2778970 ']' 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2778970 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2778970 ']' 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2778970 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:08.315 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2778970 00:23:08.573 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:08.573 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:08.573 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2778970' 00:23:08.573 killing process 
with pid 2778970 00:23:08.573 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2778970 00:23:08.573 10:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2778970 00:23:08.573 10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:08.573 10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:08.573 10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:08.573 10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:08.573 10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:08.573 10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:08.573 10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:08.573 10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:08.573 10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:08.573 10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.573 10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.573 10:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:11.105 00:23:11.105 real 0m36.627s 00:23:11.105 user 1m58.104s 00:23:11.105 sys 0m7.270s 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:11.105 ************************************ 00:23:11.105 END TEST nvmf_failover 00:23:11.105 ************************************ 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.105 ************************************ 00:23:11.105 START TEST nvmf_host_discovery 00:23:11.105 ************************************ 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:11.105 * Looking for test storage... 
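For anyone replaying the failover suite that finished above by hand: its pass/fail gate (the host/failover.sh@65 and @67 trace entries earlier) is simply a count of "Resetting controller successful" lines in the bdevperf log it captured. A minimal sketch of that check, assuming the try.txt path shown in the trace and the expected count of 3 implied by the (( count != 3 )) guard:

# Hedged re-creation of the failover pass check traced above (not the script itself).
# try.txt is the bdevperf log the test writes under test/nvmf/host/; 3 is the number
# of successful controller resets expected for this listener topology.
log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$log")
if (( count != 3 )); then
    echo "nvmf_failover: expected 3 successful resets, found $count" >&2
    exit 1
fi
echo "nvmf_failover: reset count OK ($count)"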
00:23:11.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:11.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.105 --rc genhtml_branch_coverage=1 00:23:11.105 --rc genhtml_function_coverage=1 00:23:11.105 --rc genhtml_legend=1 00:23:11.105 --rc geninfo_all_blocks=1 00:23:11.105 --rc geninfo_unexecuted_blocks=1 00:23:11.105 00:23:11.105 ' 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:11.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.105 --rc genhtml_branch_coverage=1 00:23:11.105 --rc genhtml_function_coverage=1 00:23:11.105 --rc genhtml_legend=1 00:23:11.105 --rc geninfo_all_blocks=1 00:23:11.105 --rc geninfo_unexecuted_blocks=1 00:23:11.105 00:23:11.105 ' 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:11.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.105 --rc genhtml_branch_coverage=1 00:23:11.105 --rc genhtml_function_coverage=1 00:23:11.105 --rc genhtml_legend=1 00:23:11.105 --rc geninfo_all_blocks=1 00:23:11.105 --rc geninfo_unexecuted_blocks=1 00:23:11.105 00:23:11.105 ' 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:11.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.105 --rc genhtml_branch_coverage=1 00:23:11.105 --rc genhtml_function_coverage=1 00:23:11.105 --rc genhtml_legend=1 00:23:11.105 --rc geninfo_all_blocks=1 00:23:11.105 --rc geninfo_unexecuted_blocks=1 00:23:11.105 00:23:11.105 ' 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:11.105 10:51:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:11.105 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:11.106 10:51:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:16.372 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:16.372 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.372 10:51:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:16.372 Found net devices under 0000:86:00.0: cvl_0_0 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:16.372 Found net devices under 0000:86:00.1: cvl_0_1 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:16.372 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:16.373 
10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.373 10:51:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.373 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.373 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.373 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:16.373 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:16.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:23:16.631 00:23:16.631 --- 10.0.0.2 ping statistics --- 00:23:16.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.631 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:23:16.631 00:23:16.631 --- 10.0.0.1 ping statistics --- 00:23:16.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.631 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2787527 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2787527 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 2787527 ']' 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.631 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:16.632 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.632 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:16.632 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.632 [2024-11-07 10:51:44.221133] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:23:16.632 [2024-11-07 10:51:44.221193] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.632 [2024-11-07 10:51:44.287943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.890 [2024-11-07 10:51:44.330910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.890 [2024-11-07 10:51:44.330944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.890 [2024-11-07 10:51:44.330954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.890 [2024-11-07 10:51:44.330960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.890 [2024-11-07 10:51:44.330966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.890 [2024-11-07 10:51:44.331551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.890 [2024-11-07 10:51:44.462408] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.890 [2024-11-07 10:51:44.474609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.890 null0 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.890 null1 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2787693 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2787693 /tmp/host.sock 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 2787693 ']' 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:23:16.890 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:16.891 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:16.891 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:16.891 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:16.891 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.891 [2024-11-07 10:51:44.543901] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:23:16.891 [2024-11-07 10:51:44.543945] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2787693 ] 00:23:17.149 [2024-11-07 10:51:44.602755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.149 [2024-11-07 10:51:44.646790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.149 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.407 10:51:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.407 [2024-11-07 10:51:45.052092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.407 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.665 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:17.665 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:17.665 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.665 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.665 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.665 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.665 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.665 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.665 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.665 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:17.665 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:17.665 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:17.665 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:23:17.666 10:51:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:23:18.231 [2024-11-07 10:51:45.768809] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:18.231 [2024-11-07 10:51:45.768828] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:18.231 [2024-11-07 10:51:45.768840] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:18.231 [2024-11-07 10:51:45.855094] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:18.489 [2024-11-07 10:51:45.949885] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:18.489 [2024-11-07 10:51:45.950627] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1db4e10:1 started. 00:23:18.489 [2024-11-07 10:51:45.952056] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:18.489 [2024-11-07 10:51:45.952072] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:18.489 [2024-11-07 10:51:45.957603] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1db4e10 was disconnected and freed. delete nvme_qpair. 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:18.746 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:18.747 10:51:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:18.747 10:51:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.747 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:19.005 [2024-11-07 10:51:46.586601] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1db5670:1 started. 00:23:19.005 [2024-11-07 10:51:46.589049] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1db5670 was disconnected and freed. delete nvme_qpair. 
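At this point the trace has finished the target-side provisioning and the host-side discovery service has attached the first controller and reported both namespaces. The sketch below condenses the rpc_cmd calls visible above into a plain script for readability; the scripts/rpc.py path and the direct invocation are assumptions (the test actually goes through its rpc_cmd wrapper and runs the target inside the cvl_0_0_ns_spdk namespace), while the RPC names, NQNs, addresses and ports are the ones used in this run.

    #!/usr/bin/env bash
    # Condensed sketch of the control flow exercised by host/discovery.sh up to
    # this point. RPC=.../scripts/rpc.py is an assumed path, not taken from the log.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Target side (nvmf_tgt on the default /var/tmp/spdk.sock):
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    $RPC bdev_null_create null0 1000 512
    $RPC bdev_null_create null1 1000 512
    $RPC bdev_wait_for_examine

    # Host side (second nvmf_tgt on /tmp/host.sock) starts the discovery service:
    $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # Back on the target: expose a subsystem, a data listener and the host NQN,
    # adding the namespaces one at a time so the discovery AERs can be observed.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

    # Host-side checks mirrored from get_subsystem_names/get_bdev_list above:
    $RPC -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'      # -> nvme0
    $RPC -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'                 # -> nvme0n1, nvme0n2
    $RPC -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'   # -> 2 (one per namespace attach)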
00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.005 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.005 [2024-11-07 10:51:46.672451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:19.263 [2024-11-07 10:51:46.672911] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:19.263 [2024-11-07 10:51:46.672930] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:19.263 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.263 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:19.263 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:19.263 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:19.263 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:19.263 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.264 [2024-11-07 10:51:46.759516] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:19.264 10:51:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:23:19.522 [2024-11-07 10:51:47.065950] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:19.522 [2024-11-07 10:51:47.065984] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:19.522 [2024-11-07 10:51:47.065992] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:19.522 [2024-11-07 10:51:47.065997] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.457 [2024-11-07 10:51:47.920258] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:20.457 [2024-11-07 10:51:47.920278] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:20.457 [2024-11-07 10:51:47.920952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.457 [2024-11-07 10:51:47.920967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-07 10:51:47.920975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.457 [2024-11-07 10:51:47.920982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-07 10:51:47.920990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.457 [2024-11-07 10:51:47.920996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-07 10:51:47.921003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.457 [2024-11-07 10:51:47.921010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.457 [2024-11-07 10:51:47.921016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85390 is same with the state(6) to be set 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 
max=10 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:20.457 [2024-11-07 10:51:47.930963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d85390 (9): Bad file descriptor 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:20.457 [2024-11-07 10:51:47.941002] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:20.457 [2024-11-07 10:51:47.941016] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:20.457 [2024-11-07 10:51:47.941020] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:20.457 [2024-11-07 10:51:47.941025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:20.457 [2024-11-07 10:51:47.941043] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:20.457 [2024-11-07 10:51:47.941305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.457 [2024-11-07 10:51:47.941319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d85390 with addr=10.0.0.2, port=4420 00:23:20.457 [2024-11-07 10:51:47.941328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85390 is same with the state(6) to be set 00:23:20.457 [2024-11-07 10:51:47.941339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d85390 (9): Bad file descriptor 00:23:20.457 [2024-11-07 10:51:47.941356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:20.457 [2024-11-07 10:51:47.941364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:20.457 [2024-11-07 10:51:47.941372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:20.457 [2024-11-07 10:51:47.941378] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:20.457 [2024-11-07 10:51:47.941383] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:20.457 [2024-11-07 10:51:47.941388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
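The repeated "connect() failed, errno = 111" cycles above are the expected fallout of the listener removal issued at host/discovery.sh@127: the host-side bdev_nvme layer keeps reconnecting to 10.0.0.2:4420 after that listener is gone, and ECONNREFUSED is the intended symptom until discovery moves the path to 4421. A minimal sketch of reproducing the target-side step by hand, assuming the rpc.py path inside this workspace and the target's default RPC socket (the removal command itself is the one traced above):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path to the checked-out SPDK tree
    # Drop the 4420 listener, exactly as the test does at host/discovery.sh@127
    "$RPC" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Confirm that only the 4421 listener remains on the subsystem
    "$RPC" nvmf_get_subsystems \
        | jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0").listen_addresses[].trsvcid'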
00:23:20.457 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.457 [2024-11-07 10:51:47.951074] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:20.457 [2024-11-07 10:51:47.951085] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:20.457 [2024-11-07 10:51:47.951089] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:20.457 [2024-11-07 10:51:47.951093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:20.457 [2024-11-07 10:51:47.951106] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:20.457 [2024-11-07 10:51:47.951293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.457 [2024-11-07 10:51:47.951311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d85390 with addr=10.0.0.2, port=4420 00:23:20.457 [2024-11-07 10:51:47.951318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85390 is same with the state(6) to be set 00:23:20.458 [2024-11-07 10:51:47.951332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d85390 (9): Bad file descriptor 00:23:20.458 [2024-11-07 10:51:47.951343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:20.458 [2024-11-07 10:51:47.951349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:20.458 [2024-11-07 10:51:47.951356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:20.458 [2024-11-07 10:51:47.951362] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:20.458 [2024-11-07 10:51:47.951366] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:20.458 [2024-11-07 10:51:47.951370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:20.458 [2024-11-07 10:51:47.961138] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:20.458 [2024-11-07 10:51:47.961151] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:20.458 [2024-11-07 10:51:47.961155] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:20.458 [2024-11-07 10:51:47.961159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:20.458 [2024-11-07 10:51:47.961173] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:20.458 [2024-11-07 10:51:47.961422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.458 [2024-11-07 10:51:47.961443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d85390 with addr=10.0.0.2, port=4420 00:23:20.458 [2024-11-07 10:51:47.961452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85390 is same with the state(6) to be set 00:23:20.458 [2024-11-07 10:51:47.961463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d85390 (9): Bad file descriptor 00:23:20.458 [2024-11-07 10:51:47.961479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:20.458 [2024-11-07 10:51:47.961486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:20.458 [2024-11-07 10:51:47.961493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:20.458 [2024-11-07 10:51:47.961499] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:20.458 [2024-11-07 10:51:47.961503] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:20.458 [2024-11-07 10:51:47.961507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:20.458 [2024-11-07 10:51:47.971205] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:20.458 [2024-11-07 10:51:47.971219] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:20.458 [2024-11-07 10:51:47.971223] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:20.458 [2024-11-07 10:51:47.971229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:20.458 [2024-11-07 10:51:47.971244] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:20.458 [2024-11-07 10:51:47.971556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.458 [2024-11-07 10:51:47.971570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d85390 with addr=10.0.0.2, port=4420 00:23:20.458 [2024-11-07 10:51:47.971581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85390 is same with the state(6) to be set 00:23:20.458 [2024-11-07 10:51:47.971592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d85390 (9): Bad file descriptor 00:23:20.458 [2024-11-07 10:51:47.971614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:20.458 [2024-11-07 10:51:47.971622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:20.458 [2024-11-07 10:51:47.971629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:20.458 [2024-11-07 10:51:47.971635] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
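Both sides of the test are driven through the rpc_cmd helper seen throughout this trace: calls carrying -s /tmp/host.sock go to the host application, calls without it go to the nvmf target. A minimal stand-in that mirrors only this observable behaviour; the real helper in autotest_common.sh also manages xtrace and may route through a persistent RPC session, which this sketch omits:

    rpc_cmd() {
        # Forward everything, including an optional "-s <socket>", to scripts/rpc.py.
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed path
        "$rpc" "$@"
    }

    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers   # host-side view
    rpc_cmd nvmf_get_subsystems                           # target-side view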
00:23:20.458 [2024-11-07 10:51:47.971640] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:20.458 [2024-11-07 10:51:47.971654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:20.458 [2024-11-07 10:51:47.981274] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.458 [2024-11-07 10:51:47.981286] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:20.458 [2024-11-07 10:51:47.981292] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:20.458 [2024-11-07 10:51:47.981296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:20.458 [2024-11-07 10:51:47.981309] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
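The waitforcondition loop traced at common/autotest_common.sh@916-920 is what keeps re-evaluating conditions such as '[[ "$(get_subsystem_names)" == "nvme0" ]]' while the controller above is still reconnecting. A minimal re-creation of that pattern: the retry budget of 10 matches the trace, while the sleep between retries and the failure return value are assumptions, since only the success path is visible here.

    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition met, stop polling
            sleep 1                    # assumed pacing between retries
        done
        return 1                       # assumed: give up once the budget is spent
    }

    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'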
00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:20.458 [2024-11-07 10:51:47.981495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.458 [2024-11-07 10:51:47.981509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d85390 with addr=10.0.0.2, port=4420 00:23:20.458 [2024-11-07 10:51:47.981518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85390 is same with the state(6) to be set 00:23:20.458 [2024-11-07 10:51:47.981530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d85390 (9): Bad file descriptor 00:23:20.458 [2024-11-07 10:51:47.981543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:20.458 [2024-11-07 10:51:47.981551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:20.458 [2024-11-07 10:51:47.981562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:20.458 [2024-11-07 10:51:47.981567] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:20.458 [2024-11-07 10:51:47.981572] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:20.458 [2024-11-07 10:51:47.981577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.458 10:51:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:20.458 [2024-11-07 10:51:47.991340] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:20.458 [2024-11-07 10:51:47.991356] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:20.458 [2024-11-07 10:51:47.991360] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:20.458 [2024-11-07 10:51:47.991364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:20.458 [2024-11-07 10:51:47.991378] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:20.458 [2024-11-07 10:51:47.991572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.458 [2024-11-07 10:51:47.991584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d85390 with addr=10.0.0.2, port=4420 00:23:20.458 [2024-11-07 10:51:47.991592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85390 is same with the state(6) to be set 00:23:20.458 [2024-11-07 10:51:47.991603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d85390 (9): Bad file descriptor 00:23:20.458 [2024-11-07 10:51:47.991613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:20.458 [2024-11-07 10:51:47.991619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:20.458 [2024-11-07 10:51:47.991626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:20.458 [2024-11-07 10:51:47.991631] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:20.458 [2024-11-07 10:51:47.991636] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:20.458 [2024-11-07 10:51:47.991640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:20.458 [2024-11-07 10:51:48.001408] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:20.458 [2024-11-07 10:51:48.001419] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:20.458 [2024-11-07 10:51:48.001423] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:20.458 [2024-11-07 10:51:48.001427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:20.458 [2024-11-07 10:51:48.001444] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:20.458 [2024-11-07 10:51:48.001650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.459 [2024-11-07 10:51:48.001661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d85390 with addr=10.0.0.2, port=4420 00:23:20.459 [2024-11-07 10:51:48.001669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d85390 is same with the state(6) to be set 00:23:20.459 [2024-11-07 10:51:48.001679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d85390 (9): Bad file descriptor 00:23:20.459 [2024-11-07 10:51:48.001692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:20.459 [2024-11-07 10:51:48.001698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:20.459 [2024-11-07 10:51:48.001705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:20.459 [2024-11-07 10:51:48.001710] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
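The getters being polled in this stretch of the trace are visible in the xtrace at host/discovery.sh@55, @59 and @63: each one queries the host application over /tmp/host.sock and normalizes the answer into a single sorted line. Reconstructed from that trace; the definitions in the real discovery.sh may differ in detail:

    get_subsystem_names() {    # names of attached controllers, e.g. "nvme0"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {          # namespaces exposed as bdevs, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {    # listening ports behind one controller, e.g. "4420 4421"
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }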
00:23:20.459 [2024-11-07 10:51:48.001715] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:20.459 [2024-11-07 10:51:48.001719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:20.459 [2024-11-07 10:51:48.007327] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:20.459 [2024-11-07 10:51:48.007344] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:20.459 10:51:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.459 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.717 10:51:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.649 [2024-11-07 10:51:49.287550] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:21.649 [2024-11-07 10:51:49.287567] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:21.649 [2024-11-07 10:51:49.287577] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:21.907 [2024-11-07 10:51:49.373840] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:22.165 [2024-11-07 10:51:49.638111] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:22.165 [2024-11-07 10:51:49.638684] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1d847f0:1 started. 
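Just before the discovery service is restarted above, the test checks the notification stream (host/discovery.sh@74-75): notify_get_notifications -i <last_id> returns only events newer than the last consumed id, and the count of new events is compared against expected_count. A sketch inferred from the values in the trace (counts of 0 then 2, with notify_id advancing from 2 to 4); how the real script advances notify_id is an assumption:

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))   # assumed advancement rule
    }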
00:23:22.165 [2024-11-07 10:51:49.640231] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:22.165 [2024-11-07 10:51:49.640256] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.165 [2024-11-07 10:51:49.647251] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1d847f0 was disconnected and freed. delete nvme_qpair.
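The NOT wrapper applied at host/discovery.sh@143 inverts the test's notion of success: starting a second discovery service under the already-used name "nvme" must fail, and the JSON-RPC exchange that follows shows the expected -17 "File exists" error (the same idiom is used later for the 8010 case, where the expected error is -110 "Connection timed out"). A minimal stand-in for the idea; the real helper in autotest_common.sh also inspects the exit status for signals and allowed error strings, which this sketch omits:

    NOT() {
        if "$@"; then
            return 1   # the wrapped command unexpectedly succeeded
        fi
        return 0       # it failed, which is what the test wanted
    }

    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
        -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w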
00:23:22.165 request: 00:23:22.165 { 00:23:22.165 "name": "nvme", 00:23:22.165 "trtype": "tcp", 00:23:22.165 "traddr": "10.0.0.2", 00:23:22.165 "adrfam": "ipv4", 00:23:22.165 "trsvcid": "8009", 00:23:22.165 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:22.165 "wait_for_attach": true, 00:23:22.165 "method": "bdev_nvme_start_discovery", 00:23:22.165 "req_id": 1 00:23:22.165 } 00:23:22.165 Got JSON-RPC error response 00:23:22.165 response: 00:23:22.165 { 00:23:22.165 "code": -17, 00:23:22.165 "message": "File exists" 00:23:22.165 } 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.165 request: 00:23:22.165 { 00:23:22.165 "name": "nvme_second", 00:23:22.165 "trtype": "tcp", 00:23:22.165 "traddr": "10.0.0.2", 00:23:22.165 "adrfam": "ipv4", 00:23:22.165 "trsvcid": "8009", 00:23:22.165 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:22.165 "wait_for_attach": true, 00:23:22.165 "method": "bdev_nvme_start_discovery", 00:23:22.165 "req_id": 1 00:23:22.165 } 00:23:22.165 Got JSON-RPC error response 00:23:22.165 response: 00:23:22.165 { 00:23:22.165 "code": -17, 00:23:22.165 "message": "File exists" 00:23:22.165 } 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:22.165 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.166 10:51:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:22.166 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:22.424 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.424 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:22.424 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:22.424 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:22.424 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:22.424 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:22.424 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.424 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:22.424 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:22.424 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:22.424 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.424 10:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.358 [2024-11-07 10:51:50.883747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:23.358 [2024-11-07 10:51:50.883782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1db47a0 with addr=10.0.0.2, port=8010 00:23:23.358 [2024-11-07 10:51:50.883800] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:23.358 [2024-11-07 10:51:50.883807] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:23.358 [2024-11-07 10:51:50.883814] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:24.290 [2024-11-07 10:51:51.886125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.290 [2024-11-07 10:51:51.886150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d84330 with addr=10.0.0.2, port=8010 00:23:24.290 [2024-11-07 10:51:51.886161] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:24.290 [2024-11-07 10:51:51.886168] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:24.290 [2024-11-07 10:51:51.886174] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:25.223 [2024-11-07 10:51:52.888325] 
bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:25.223 request: 00:23:25.223 { 00:23:25.223 "name": "nvme_second", 00:23:25.223 "trtype": "tcp", 00:23:25.223 "traddr": "10.0.0.2", 00:23:25.223 "adrfam": "ipv4", 00:23:25.481 "trsvcid": "8010", 00:23:25.481 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:25.481 "wait_for_attach": false, 00:23:25.481 "attach_timeout_ms": 3000, 00:23:25.481 "method": "bdev_nvme_start_discovery", 00:23:25.481 "req_id": 1 00:23:25.481 } 00:23:25.481 Got JSON-RPC error response 00:23:25.481 response: 00:23:25.481 { 00:23:25.481 "code": -110, 00:23:25.481 "message": "Connection timed out" 00:23:25.481 } 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2787693 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:25.481 rmmod nvme_tcp 00:23:25.481 rmmod nvme_fabrics 00:23:25.481 rmmod nvme_keyring 00:23:25.481 10:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:25.481 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:25.481 10:51:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:25.481 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2787527 ']' 00:23:25.481 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2787527 00:23:25.481 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 2787527 ']' 00:23:25.481 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 2787527 00:23:25.481 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:23:25.481 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:25.481 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2787527 00:23:25.481 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:25.481 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:25.481 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2787527' 00:23:25.481 killing process with pid 2787527 00:23:25.481 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 2787527 00:23:25.481 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 2787527 00:23:25.740 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:25.740 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:25.740 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:25.740 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:25.740 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:25.740 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:25.740 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:25.740 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:25.740 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:25.740 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.740 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.740 10:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.641 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:27.641 00:23:27.641 real 0m16.985s 00:23:27.641 user 0m20.617s 00:23:27.641 sys 0m5.530s 00:23:27.642 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:27.642 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.642 ************************************ 00:23:27.642 END TEST nvmf_host_discovery 00:23:27.642 ************************************ 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.900 ************************************ 00:23:27.900 START TEST nvmf_host_multipath_status 00:23:27.900 ************************************ 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:27.900 * Looking for test storage... 00:23:27.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.900 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:27.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.901 --rc genhtml_branch_coverage=1 00:23:27.901 --rc genhtml_function_coverage=1 00:23:27.901 --rc genhtml_legend=1 00:23:27.901 --rc geninfo_all_blocks=1 00:23:27.901 --rc geninfo_unexecuted_blocks=1 00:23:27.901 00:23:27.901 ' 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:27.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.901 --rc genhtml_branch_coverage=1 00:23:27.901 --rc genhtml_function_coverage=1 00:23:27.901 --rc genhtml_legend=1 00:23:27.901 --rc geninfo_all_blocks=1 00:23:27.901 --rc geninfo_unexecuted_blocks=1 00:23:27.901 00:23:27.901 ' 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:27.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.901 --rc genhtml_branch_coverage=1 00:23:27.901 --rc genhtml_function_coverage=1 00:23:27.901 --rc genhtml_legend=1 00:23:27.901 --rc geninfo_all_blocks=1 00:23:27.901 --rc geninfo_unexecuted_blocks=1 00:23:27.901 00:23:27.901 ' 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:27.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.901 --rc genhtml_branch_coverage=1 00:23:27.901 --rc genhtml_function_coverage=1 00:23:27.901 --rc genhtml_legend=1 00:23:27.901 --rc geninfo_all_blocks=1 00:23:27.901 --rc geninfo_unexecuted_blocks=1 00:23:27.901 00:23:27.901 ' 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
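The scripts/common.sh trace just above is the coverage prologue of the multipath_status run: it reads the installed lcov version and, because lt 1.15 2 holds here, exports the lcov_branch_coverage/lcov_function_coverage variants of the flags. A compressed re-statement of that version gate, not the verbatim helper, and showing only a subset of the flags seen in the trace:

    lt() {   # succeed when dotted version $1 is strictly older than $2
        local -a v1 v2
        local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        # A 2.x lcov would presumably take a different branch; not shown here.
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi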
00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:27.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.901 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:27.902 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:27.902 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:27.902 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.159 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:28.159 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:28.159 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:28.159 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.159 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.159 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.159 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:28.159 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:28.159 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:28.159 10:51:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:34.781 10:52:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:34.781 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
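The device scan traced here builds the e810/x722/mlx vendor:device tables and then matches each PCI function against them before looking up its net devices under sysfs. A rough standalone sketch of the same idea, assuming only the standard sysfs layout (not nvmf/common.sh itself):

# Sketch: list PCI functions matching the Intel E810 part (0x8086:0x159b) and their net devices
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")
    device=$(cat "$pci/device")
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        echo "Found ${pci##*/} ($vendor - $device)"
        ls "$pci/net" 2>/dev/null   # kernel net devices backing this port, e.g. cvl_0_0
    fi
done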
00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:34.781 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:34.781 Found net devices under 0000:86:00.0: cvl_0_0 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:23:34.781 Found net devices under 0000:86:00.1: cvl_0_1 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.781 10:52:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:34.781 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:34.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:23:34.782 00:23:34.782 --- 10.0.0.2 ping statistics --- 00:23:34.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.782 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:34.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:23:34.782 00:23:34.782 --- 10.0.0.1 ping statistics --- 00:23:34.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.782 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2792778 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2792778 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2792778 ']' 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:34.782 10:52:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:34.782 [2024-11-07 10:52:01.555069] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:23:34.782 [2024-11-07 10:52:01.555118] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.782 [2024-11-07 10:52:01.623286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:34.782 [2024-11-07 10:52:01.662970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.782 [2024-11-07 10:52:01.663006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.782 [2024-11-07 10:52:01.663014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.782 [2024-11-07 10:52:01.663020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.782 [2024-11-07 10:52:01.663025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.782 [2024-11-07 10:52:01.664237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.782 [2024-11-07 10:52:01.664240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2792778 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:34.782 [2024-11-07 10:52:01.956406] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.782 10:52:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:34.782 Malloc0 00:23:34.782 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:23:34.782 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:35.040 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:35.298 [2024-11-07 10:52:02.718157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.298 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:35.298 [2024-11-07 10:52:02.914644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:35.298 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2793045 00:23:35.298 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:35.298 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.298 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2793045 /var/tmp/bdevperf.sock 00:23:35.298 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2793045 ']' 00:23:35.298 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.298 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:35.298 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
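For reference, the target-side bring-up just traced reduces to the following RPC sequence (rpc.py path shortened for readability; arguments copied from the log, nvmf_tgt assumed already running inside the cvl_0_0_ns_spdk namespace):

rpc=./scripts/rpc.py   # stands in for the full workspace path to spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# host side: bdevperf on its own RPC socket so the test can drive it independently
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &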
00:23:35.298 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:35.298 10:52:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:35.565 10:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:35.565 10:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:23:35.565 10:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:35.823 10:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:36.388 Nvme0n1 00:23:36.388 10:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:36.647 Nvme0n1 00:23:36.647 10:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:36.647 10:52:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:38.549 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:38.549 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:38.807 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:39.067 10:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:40.001 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:40.001 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:40.001 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.001 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:40.260 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.260 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:40.260 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.260 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:40.519 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:40.519 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:40.519 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.519 10:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:40.519 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.519 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:40.519 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:40.519 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.778 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.778 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:40.778 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.778 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:41.037 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.037 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:41.037 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.037 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:41.296 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.296 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:41.296 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
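set_ANA_state, as exercised throughout this trace, is the same listener RPC issued once per port. A condensed sketch with the arguments seen above (wrapper reconstructed from the xtrace, not copied from multipath_status.sh):

# Sketch: flip the ANA state of both listeners of cnode1, one argument per port
set_ANA_state() {
    local state_4420=$1 state_4421=$2
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
}
set_ANA_state non_optimized optimized   # the transition being applied at this point in the log
sleep 1                                 # give the host a moment to observe the ANA change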
00:23:41.555 10:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:41.555 10:52:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:42.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:42.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:42.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:42.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:42.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:42.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.936 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:43.196 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.196 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:43.196 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.196 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:43.196 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.196 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:43.196 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.196 10:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:43.455 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.455 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:43.455 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
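Each port_status check above pairs a bdev_nvme_get_io_paths RPC against the bdevperf socket with a jq filter on one listener's field (current, connected, or accessible). A sketch of that check, reconstructed from the xtrace (wrapper shape assumed, jq filter copied verbatim):

# Sketch: query bdevperf's view of one listener and compare a field with the expected value
port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    actual=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
    [[ $actual == "$expected" ]]
}
port_status 4420 current true       # e.g. after set_ANA_state optimized optimized
port_status 4421 accessible false   # e.g. after the 4421 listener is made inaccessible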
00:23:43.455 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:43.714 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.714 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:43.714 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.714 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:43.973 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.973 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:43.973 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:43.973 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:44.232 10:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:45.609 10:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:45.609 10:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:45.609 10:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.609 10:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:45.609 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.609 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:45.609 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.609 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:45.609 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.609 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:45.609 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.609 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:45.868 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.868 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:45.868 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.868 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:46.127 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.127 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:46.127 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.127 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:46.387 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.387 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:46.387 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.387 10:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:46.387 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.387 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:46.387 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:46.646 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:46.905 10:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:47.841 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:47.841 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:47.841 10:52:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.841 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:48.100 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.100 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:48.100 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.100 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:48.358 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:48.358 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:48.358 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.358 10:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:48.617 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.617 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:48.617 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.617 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:48.876 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.876 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:48.876 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:48.876 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.876 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.876 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:48.876 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.876 10:52:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:49.135 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:49.135 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:49.135 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:49.393 10:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:49.652 10:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:50.587 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:50.587 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:50.587 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.587 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:50.846 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.846 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:50.846 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:50.846 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.105 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:51.105 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:51.105 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.105 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:51.105 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.105 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:51.105 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.105 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:51.363 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.363 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:51.363 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.363 10:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:51.622 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:51.622 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:51.622 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.622 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:51.880 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:51.880 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:51.881 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:51.881 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:52.138 10:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:53.073 10:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:53.073 10:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:53.073 10:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.073 10:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:53.331 10:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:53.331 10:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:53.331 10:52:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.331 10:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:53.590 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.590 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:53.590 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.590 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:53.849 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.849 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:53.849 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:53.849 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.107 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.107 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:54.107 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.107 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:54.107 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.107 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:54.107 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.107 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:54.366 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.366 10:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:54.624 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:23:54.624 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:54.883 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:55.141 10:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:56.123 10:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:56.123 10:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:56.123 10:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.123 10:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:56.381 10:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.381 10:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:56.381 10:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:56.381 10:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.381 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.381 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:56.381 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.381 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:56.639 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.639 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:56.639 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.639 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:56.897 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.897 10:52:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:56.897 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.897 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:57.155 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.155 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:57.155 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.155 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:57.413 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.413 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:57.413 10:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:57.671 10:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:57.671 10:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:59.046 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:59.046 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:59.046 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.046 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:59.046 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.046 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:59.046 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.046 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:59.304 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.304 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:59.304 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:59.304 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.304 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.304 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:59.304 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.304 10:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:59.562 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.562 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:59.562 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.562 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:59.821 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.821 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:59.821 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.821 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:00.079 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.079 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:00.079 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:00.079 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:00.337 10:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
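For readability, here is a minimal sketch of the helpers driving the trace above, reconstructed only from the commands as they appear in the log; the authoritative definitions live in test/nvmf/host/multipath_status.sh and may differ in detail, and the rpc_py/bdevperf_rpc_sock variable names are assumptions:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path taken from the log
bdevperf_rpc_sock=/var/tmp/bdevperf.sock                                  # socket taken from the log

# Flip the ANA state of the 4420 and 4421 listeners on the target side
# (multipath_status.sh lines 59-60 in the trace).
set_ANA_state() {
	$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
		-t tcp -a 10.0.0.2 -s 4420 -n "$1"
	$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
		-t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

# Ask bdevperf, over its private RPC socket, for its I/O paths and compare one
# attribute (current/connected/accessible) of the path with the given trsvcid
# against the expected value (line 64 in the trace).
port_status() {
	local port=$1 attr=$2 expected=$3
	[[ $($rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
		| jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr") == "$expected" ]]
}

# The six booleans checked after every ANA transition in the log above
# (lines 68-73 in the trace).
check_status() {
	port_status 4420 current "$1"
	port_status 4421 current "$2"
	port_status 4420 connected "$3"
	port_status 4421 connected "$4"
	port_status 4420 accessible "$5"
	port_status 4421 accessible "$6"
}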
00:24:01.711 10:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:01.711 10:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:01.711 10:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.711 10:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:01.711 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.711 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:01.711 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.711 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:01.711 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.711 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:01.711 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.711 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:01.968 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.968 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:01.968 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.968 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:02.225 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.225 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:02.225 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.225 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:02.483 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.483 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:02.483 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.483 10:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:02.741 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.741 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:02.741 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:03.000 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:03.000 10:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:04.374 10:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:04.374 10:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:04.374 10:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.374 10:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:04.374 10:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.374 10:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:04.374 10:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.374 10:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:04.374 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.374 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:04.374 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.374 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:04.632 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:04.632 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:04.632 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.632 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:04.890 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.890 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:04.890 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.890 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:05.148 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.148 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:05.148 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.148 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:05.406 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:05.406 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2793045 00:24:05.406 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2793045 ']' 00:24:05.406 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2793045 00:24:05.406 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:24:05.406 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:05.406 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2793045 00:24:05.406 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:05.406 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:05.406 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2793045' 00:24:05.406 killing process with pid 2793045 00:24:05.406 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2793045 00:24:05.406 10:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2793045 00:24:05.406 { 00:24:05.406 "results": [ 00:24:05.406 { 00:24:05.406 "job": "Nvme0n1", 
00:24:05.406 "core_mask": "0x4", 00:24:05.406 "workload": "verify", 00:24:05.406 "status": "terminated", 00:24:05.406 "verify_range": { 00:24:05.406 "start": 0, 00:24:05.406 "length": 16384 00:24:05.406 }, 00:24:05.406 "queue_depth": 128, 00:24:05.406 "io_size": 4096, 00:24:05.406 "runtime": 28.658895, 00:24:05.406 "iops": 10342.582992121643, 00:24:05.406 "mibps": 40.40071481297517, 00:24:05.406 "io_failed": 0, 00:24:05.406 "io_timeout": 0, 00:24:05.406 "avg_latency_us": 12356.113119360994, 00:24:05.406 "min_latency_us": 1104.1391304347826, 00:24:05.406 "max_latency_us": 3019898.88 00:24:05.406 } 00:24:05.406 ], 00:24:05.406 "core_count": 1 00:24:05.406 } 00:24:05.668 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2793045 00:24:05.668 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.668 [2024-11-07 10:52:02.979285] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:24:05.668 [2024-11-07 10:52:02.979341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2793045 ] 00:24:05.668 [2024-11-07 10:52:03.039878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.668 [2024-11-07 10:52:03.083007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.668 Running I/O for 90 seconds... 00:24:05.668 11161.00 IOPS, 43.60 MiB/s [2024-11-07T09:52:33.339Z] 11118.00 IOPS, 43.43 MiB/s [2024-11-07T09:52:33.339Z] 11137.67 IOPS, 43.51 MiB/s [2024-11-07T09:52:33.339Z] 11130.00 IOPS, 43.48 MiB/s [2024-11-07T09:52:33.339Z] 11117.60 IOPS, 43.43 MiB/s [2024-11-07T09:52:33.339Z] 11099.50 IOPS, 43.36 MiB/s [2024-11-07T09:52:33.339Z] 11092.43 IOPS, 43.33 MiB/s [2024-11-07T09:52:33.339Z] 11086.25 IOPS, 43.31 MiB/s [2024-11-07T09:52:33.339Z] 11117.00 IOPS, 43.43 MiB/s [2024-11-07T09:52:33.339Z] 11098.10 IOPS, 43.35 MiB/s [2024-11-07T09:52:33.339Z] 11091.09 IOPS, 43.32 MiB/s [2024-11-07T09:52:33.339Z] 11122.17 IOPS, 43.45 MiB/s [2024-11-07T09:52:33.339Z] [2024-11-07 10:52:16.900394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.668 [2024-11-07 10:52:16.900440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.668 [2024-11-07 10:52:16.900530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.900982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.900988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.901001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.901008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.901020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.901027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.901039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.901046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.901058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.668 [2024-11-07 10:52:16.901064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.668 [2024-11-07 10:52:16.901077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.669 
[2024-11-07 10:52:16.901115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.901980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.901986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902239] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:05.669 [2024-11-07 10:52:16.902381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.669 [2024-11-07 10:52:16.902388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.670 [2024-11-07 10:52:16.902432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:05.670 [2024-11-07 10:52:16.902461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.670 [2024-11-07 10:52:16.902559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 
nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.902977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.902984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:24:05.670 [2024-11-07 10:52:16.903238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.670 [2024-11-07 10:52:16.903444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.670 [2024-11-07 10:52:16.903463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:16.903953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:16.903960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.671 10847.00 IOPS, 42.37 MiB/s [2024-11-07T09:52:33.342Z] 10072.21 IOPS, 39.34 MiB/s [2024-11-07T09:52:33.342Z] 9400.73 IOPS, 36.72 MiB/s [2024-11-07T09:52:33.342Z] 9029.69 IOPS, 35.27 MiB/s [2024-11-07T09:52:33.342Z] 9149.24 IOPS, 35.74 MiB/s [2024-11-07T09:52:33.342Z] 9245.11 IOPS, 36.11 MiB/s [2024-11-07T09:52:33.342Z] 9435.63 IOPS, 36.86 MiB/s [2024-11-07T09:52:33.342Z] 9643.45 IOPS, 37.67 MiB/s [2024-11-07T09:52:33.342Z] 9813.67 IOPS, 38.33 MiB/s [2024-11-07T09:52:33.342Z] 9878.73 IOPS, 38.59 MiB/s [2024-11-07T09:52:33.342Z] 9927.91 IOPS, 38.78 MiB/s [2024-11-07T09:52:33.342Z] 9991.21 IOPS, 39.03 MiB/s [2024-11-07T09:52:33.342Z] 10124.48 IOPS, 39.55 MiB/s [2024-11-07T09:52:33.342Z] 10246.15 IOPS, 40.02 MiB/s [2024-11-07T09:52:33.342Z] [2024-11-07 10:52:30.608039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:70296 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:29 nsid:1 lba:70456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:70520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.671 [2024-11-07 10:52:30.608410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.671 [2024-11-07 10:52:30.608423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.672 [2024-11-07 10:52:30.608674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:24:05.672 [2024-11-07 10:52:30.608726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.608747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.608754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.672 [2024-11-07 10:52:30.609398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:05.672 [2024-11-07 10:52:30.609411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.673 [2024-11-07 10:52:30.609417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.609431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.673 [2024-11-07 10:52:30.609444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.609457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.673 [2024-11-07 10:52:30.609463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.609476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.673 [2024-11-07 10:52:30.609484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.609496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.673 [2024-11-07 10:52:30.609503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.609516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.673 [2024-11-07 10:52:30.609523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.609535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.673 [2024-11-07 10:52:30.609542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.609555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.673 [2024-11-07 10:52:30.609562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.610566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.673 [2024-11-07 10:52:30.610583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.610600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.673 [2024-11-07 10:52:30.610607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.610620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.673 [2024-11-07 10:52:30.610627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.610640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.673 [2024-11-07 10:52:30.610646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.610659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:05.673 [2024-11-07 10:52:30.610666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.610678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.673 [2024-11-07 10:52:30.610685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.610698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.673 [2024-11-07 10:52:30.610705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.610717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.673 [2024-11-07 10:52:30.610724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.610740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:05.673 [2024-11-07 10:52:30.610746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.610759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.673 [2024-11-07 10:52:30.610766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.610778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.673 [2024-11-07 10:52:30.610785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:05.673 [2024-11-07 10:52:30.610798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:05.673 [2024-11-07 10:52:30.610804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:05.673 10297.00 IOPS, 40.22 MiB/s [2024-11-07T09:52:33.344Z] 10325.96 IOPS, 40.34 MiB/s [2024-11-07T09:52:33.344Z] Received shutdown signal, test time was about 28.659550 seconds 00:24:05.673 00:24:05.673 Latency(us) 00:24:05.673 [2024-11-07T09:52:33.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.673 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:05.673 Verification LBA range: start 0x0 length 0x4000 00:24:05.673 Nvme0n1 : 28.66 10342.58 40.40 0.00 0.00 12356.11 1104.14 3019898.88 00:24:05.673 [2024-11-07T09:52:33.344Z] =================================================================================================================== 00:24:05.673 [2024-11-07T09:52:33.344Z] Total : 10342.58 40.40 0.00 0.00 12356.11 1104.14 3019898.88 00:24:05.673 10:52:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.673 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:05.673 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:05.673 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:05.673 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:05.673 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:05.673 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.673 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:05.673 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.673 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.673 rmmod nvme_tcp 00:24:05.673 rmmod nvme_fabrics 00:24:05.673 rmmod nvme_keyring 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2792778 ']' 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2792778 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2792778 ']' 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2792778 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2792778 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2792778' 00:24:05.932 killing process with pid 2792778 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2792778 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2792778 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:05.932 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:05.933 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:05.933 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.933 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.933 10:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.465 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.465 00:24:08.465 real 0m40.280s 00:24:08.465 user 1m48.899s 00:24:08.465 sys 0m11.463s 00:24:08.465 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:08.465 10:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:08.465 ************************************ 00:24:08.465 END TEST nvmf_host_multipath_status 00:24:08.465 ************************************ 00:24:08.465 10:52:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:08.465 10:52:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:08.465 10:52:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:08.465 10:52:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.465 ************************************ 00:24:08.465 START TEST nvmf_discovery_remove_ifc 00:24:08.465 ************************************ 00:24:08.465 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:08.465 * Looking for test storage... 
00:24:08.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:08.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.466 --rc genhtml_branch_coverage=1 00:24:08.466 --rc genhtml_function_coverage=1 00:24:08.466 --rc genhtml_legend=1 00:24:08.466 --rc geninfo_all_blocks=1 00:24:08.466 --rc geninfo_unexecuted_blocks=1 00:24:08.466 00:24:08.466 ' 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:08.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.466 --rc genhtml_branch_coverage=1 00:24:08.466 --rc genhtml_function_coverage=1 00:24:08.466 --rc genhtml_legend=1 00:24:08.466 --rc geninfo_all_blocks=1 00:24:08.466 --rc geninfo_unexecuted_blocks=1 00:24:08.466 00:24:08.466 ' 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:08.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.466 --rc genhtml_branch_coverage=1 00:24:08.466 --rc genhtml_function_coverage=1 00:24:08.466 --rc genhtml_legend=1 00:24:08.466 --rc geninfo_all_blocks=1 00:24:08.466 --rc geninfo_unexecuted_blocks=1 00:24:08.466 00:24:08.466 ' 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:08.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.466 --rc genhtml_branch_coverage=1 00:24:08.466 --rc genhtml_function_coverage=1 00:24:08.466 --rc genhtml_legend=1 00:24:08.466 --rc geninfo_all_blocks=1 00:24:08.466 --rc geninfo_unexecuted_blocks=1 00:24:08.466 00:24:08.466 ' 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.466 
10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.466 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.467 10:52:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:13.734 10:52:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:13.734 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:13.734 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.735 10:52:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:13.735 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:13.735 Found net devices under 0000:86:00.0: cvl_0_0 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:13.735 Found net devices under 0000:86:00.1: cvl_0_1 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:13.735 
10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:13.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:24:13.735 00:24:13.735 --- 10.0.0.2 ping statistics --- 00:24:13.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.735 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:13.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:24:13.735 00:24:13.735 --- 10.0.0.1 ping statistics --- 00:24:13.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.735 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2801578 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2801578 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2801578 ']' 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:13.735 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
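[Editor's note] The entries above are nvmf_tcp_init from test/nvmf/common.sh building the physical-NIC TCP topology and verifying it with ping before the target application is launched. A minimal sketch of the same steps, using the interface names and addresses exactly as they appear in this log (error handling and helper wrappers such as ipts are omitted):

    ip netns add cvl_0_0_ns_spdk                        # target side lives in its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first NIC port (0000:86:00.0) into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                  # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability

The nvmf_tgt started next (nvmfappstart -m 0x2) is prefixed with NVMF_TARGET_NS_CMD, so it runs inside cvl_0_0_ns_spdk and its listeners on 10.0.0.2 are reachable only through cvl_0_1.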
00:24:13.994 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:13.994 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.994 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:13.994 [2024-11-07 10:52:41.452352] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:24:13.994 [2024-11-07 10:52:41.452400] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.994 [2024-11-07 10:52:41.519098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.994 [2024-11-07 10:52:41.559778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.994 [2024-11-07 10:52:41.559815] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.994 [2024-11-07 10:52:41.559822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.994 [2024-11-07 10:52:41.559828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.994 [2024-11-07 10:52:41.559833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.994 [2024-11-07 10:52:41.560390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.994 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:13.994 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:24:13.994 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.994 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:13.994 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.252 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.252 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:14.252 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.252 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.252 [2024-11-07 10:52:41.698266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.253 [2024-11-07 10:52:41.706447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:14.253 null0 00:24:14.253 [2024-11-07 10:52:41.738431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.253 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.253 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2801662 00:24:14.253 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 2801662 /tmp/host.sock 00:24:14.253 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2801662 ']' 00:24:14.253 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:24:14.253 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:14.253 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:14.253 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:14.253 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:14.253 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.253 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:14.253 [2024-11-07 10:52:41.806752] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:24:14.253 [2024-11-07 10:52:41.806795] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2801662 ] 00:24:14.253 [2024-11-07 10:52:41.868400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.253 [2024-11-07 10:52:41.910991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.511 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:14.511 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:24:14.511 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:14.511 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:14.511 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.511 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.511 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.511 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:14.511 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.511 10:52:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.511 10:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.511 10:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 
--reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:14.511 10:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.511 10:52:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.444 [2024-11-07 10:52:43.095961] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:15.444 [2024-11-07 10:52:43.095980] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:15.444 [2024-11-07 10:52:43.095996] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:15.701 [2024-11-07 10:52:43.222396] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:15.958 [2024-11-07 10:52:43.404380] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:15.958 [2024-11-07 10:52:43.405179] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2547a30:1 started. 00:24:15.958 [2024-11-07 10:52:43.406555] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:15.958 [2024-11-07 10:52:43.406597] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:15.958 [2024-11-07 10:52:43.406614] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:15.958 [2024-11-07 10:52:43.406627] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:15.958 [2024-11-07 10:52:43.406643] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:15.958 [2024-11-07 10:52:43.456001] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 
0x2547a30 was disconnected and freed. delete nvme_qpair. 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:15.958 10:52:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:17.393 10:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:17.393 10:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.393 10:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:17.393 10:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.393 10:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:17.393 10:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.393 10:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:17.393 10:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.393 10:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:17.393 10:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:18.337 10:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:18.337 10:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:18.337 10:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.337 10:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:18.337 10:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.337 10:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:18.338 10:52:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:18.338 10:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.338 10:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:18.338 10:52:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:19.271 10:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:19.271 10:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.271 10:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:19.271 10:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.271 10:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:19.271 10:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.271 10:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:19.271 10:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.271 10:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:19.271 10:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:20.204 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.204 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.204 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.204 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.204 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.204 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.204 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.204 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.204 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:20.204 10:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:21.138 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:21.138 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.138 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:21.138 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.138 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:21.138 10:52:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.138 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:21.396 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.396 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:21.396 10:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:21.396 [2024-11-07 10:52:48.848293] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:21.396 [2024-11-07 10:52:48.848333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.396 [2024-11-07 10:52:48.848345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.396 [2024-11-07 10:52:48.848353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.396 [2024-11-07 10:52:48.848360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.396 [2024-11-07 10:52:48.848368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.396 [2024-11-07 10:52:48.848374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.396 [2024-11-07 10:52:48.848381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.396 [2024-11-07 10:52:48.848388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.396 [2024-11-07 10:52:48.848395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.396 [2024-11-07 10:52:48.848402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.396 [2024-11-07 10:52:48.848409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2524240 is same with the state(6) to be set 00:24:21.396 [2024-11-07 10:52:48.858315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2524240 (9): Bad file descriptor 00:24:21.396 [2024-11-07 10:52:48.868353] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:21.396 [2024-11-07 10:52:48.868365] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:21.396 [2024-11-07 10:52:48.868370] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
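[Editor's note] The repeated get_bdev_list / sleep 1 entries above are the test's polling loop: host/discovery_remove_ifc.sh@33-34 keep reading the host application's bdev list until it reaches the expected value. The log only shows the expanded commands, so the following is an illustrative reconstruction of the two helpers, not a copy of the script:

    get_bdev_list() {
        # Query the host app on /tmp/host.sock and flatten the bdev names into
        # one sorted, space-separated string ("nvme0n1" while attached, "" after path loss).
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the list matches the expected value.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

Once the target-side address is deleted and cvl_0_0 is brought down (sh@75-76 above), the host's connection to 10.0.0.2:4420 times out (spdk_sock_recv errno 110 on tqpair 0x2524240) and the controller enters the reset/reconnect path, which is exactly the transition this loop is waiting to observe.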
00:24:21.396 [2024-11-07 10:52:48.868374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:21.396 [2024-11-07 10:52:48.868397] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:22.327 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:22.327 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.327 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:22.327 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.327 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:22.327 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:22.327 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:22.327 [2024-11-07 10:52:49.875465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:22.327 [2024-11-07 10:52:49.875508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2524240 with addr=10.0.0.2, port=4420 00:24:22.327 [2024-11-07 10:52:49.875526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2524240 is same with the state(6) to be set 00:24:22.327 [2024-11-07 10:52:49.875554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2524240 (9): Bad file descriptor 00:24:22.327 [2024-11-07 10:52:49.875964] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:24:22.327 [2024-11-07 10:52:49.875993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:22.327 [2024-11-07 10:52:49.876004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:22.328 [2024-11-07 10:52:49.876016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:22.328 [2024-11-07 10:52:49.876026] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:22.328 [2024-11-07 10:52:49.876033] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:22.328 [2024-11-07 10:52:49.876040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:22.328 [2024-11-07 10:52:49.876050] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
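[Editor's note] The reconnect churn above ("Start reconnecting ctrlr.", connect() errno 110, "Resetting controller failed.") is governed by the host-side setup captured earlier in this log (sh@58 and sh@65-69). Condensed, with the options exactly as logged:

    # Host application: the nvmf_tgt binary reused as an NVMe-oF host, RPC socket on /tmp/host.sock.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

    rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc_cmd -s /tmp/host.sock framework_start_init
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

With a 1-second reconnect delay and a 2-second controller-loss timeout, bdev_nvme retries the dead path only briefly before the controller is dropped and nvme0n1 disappears from the bdev list, which the empty get_bdev_list output further down confirms.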
00:24:22.328 [2024-11-07 10:52:49.876057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:22.328 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.328 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:22.328 10:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:23.260 [2024-11-07 10:52:50.878539] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:23.260 [2024-11-07 10:52:50.878566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:23.260 [2024-11-07 10:52:50.878579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:23.260 [2024-11-07 10:52:50.878586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:23.260 [2024-11-07 10:52:50.878594] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:23.260 [2024-11-07 10:52:50.878601] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:23.260 [2024-11-07 10:52:50.878606] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:23.260 [2024-11-07 10:52:50.878610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:23.260 [2024-11-07 10:52:50.878632] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:23.260 [2024-11-07 10:52:50.878663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.260 [2024-11-07 10:52:50.878673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.260 [2024-11-07 10:52:50.878685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.260 [2024-11-07 10:52:50.878691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.260 [2024-11-07 10:52:50.878699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.260 [2024-11-07 10:52:50.878705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.260 [2024-11-07 10:52:50.878712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.260 [2024-11-07 10:52:50.878719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.260 [2024-11-07 10:52:50.878726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.260 [2024-11-07 10:52:50.878732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.260 [2024-11-07 10:52:50.878740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:23.260 [2024-11-07 10:52:50.878855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2513910 (9): Bad file descriptor 00:24:23.260 [2024-11-07 10:52:50.879867] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:23.260 [2024-11-07 10:52:50.879878] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:23.260 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:23.260 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.260 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:23.260 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.260 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:23.260 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.260 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:23.260 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.518 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:23.518 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.518 10:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.518 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:23.518 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:23.518 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.518 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:23.518 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.518 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:23.518 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.518 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:23.518 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.518 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:23.518 10:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:24.450 10:52:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:24.450 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:24.450 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:24.450 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:24.450 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.450 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:24.450 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.450 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.707 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:24.707 10:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:25.271 [2024-11-07 10:52:52.930577] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:25.271 [2024-11-07 10:52:52.930594] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:25.271 [2024-11-07 10:52:52.930608] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:25.529 [2024-11-07 10:52:53.018875] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:25.529 [2024-11-07 10:52:53.121613] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:25.529 [2024-11-07 10:52:53.122130] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x2523330:1 started. 00:24:25.529 [2024-11-07 10:52:53.123189] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:25.529 [2024-11-07 10:52:53.123222] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:25.529 [2024-11-07 10:52:53.123239] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:25.529 [2024-11-07 10:52:53.123252] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:25.529 [2024-11-07 10:52:53.123260] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:25.529 [2024-11-07 10:52:53.128854] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x2523330 was disconnected and freed. delete nvme_qpair. 
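[Editor's note] Recovery phase: once the target address and link are restored (sh@82-83 above), the still-running discovery service re-attaches the subsystem as a new controller (nvme1, qpair 0x2523330), and the test waits for the corresponding namespace bdev. Condensed from the entries above:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # give the target its IP back
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # bring the port back up
    wait_for_bdev nvme1n1    # the re-attached controller enumerates as nvme1, so nvme1n1 must appear

When nvme1n1 shows up, the test clears its traps, kills the host (pid 2801662) and target (pid 2801578) applications, and nvmftestfini unloads the nvme-tcp modules, which is the cleanup visible in the next few entries before the nvmf_identify_kernel_target test begins.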
00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2801662 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2801662 ']' 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2801662 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:25.529 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2801662 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2801662' 00:24:25.787 killing process with pid 2801662 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2801662 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2801662 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.787 rmmod nvme_tcp 00:24:25.787 rmmod nvme_fabrics 00:24:25.787 rmmod nvme_keyring 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.787 
10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2801578 ']' 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2801578 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2801578 ']' 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2801578 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:25.787 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2801578 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2801578' 00:24:26.046 killing process with pid 2801578 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2801578 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2801578 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.046 10:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.580 10:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:28.580 00:24:28.580 real 0m19.986s 00:24:28.580 user 0m24.511s 00:24:28.580 sys 0m5.477s 00:24:28.580 10:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:28.580 10:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:24:28.580 ************************************ 00:24:28.580 END TEST nvmf_discovery_remove_ifc 00:24:28.580 ************************************ 00:24:28.580 10:52:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:28.580 10:52:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:28.580 10:52:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:28.580 10:52:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.580 ************************************ 00:24:28.580 START TEST nvmf_identify_kernel_target 00:24:28.580 ************************************ 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:28.581 * Looking for test storage... 00:24:28.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.581 --rc genhtml_branch_coverage=1 00:24:28.581 --rc genhtml_function_coverage=1 00:24:28.581 --rc genhtml_legend=1 00:24:28.581 --rc geninfo_all_blocks=1 00:24:28.581 --rc geninfo_unexecuted_blocks=1 00:24:28.581 00:24:28.581 ' 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.581 --rc genhtml_branch_coverage=1 00:24:28.581 --rc genhtml_function_coverage=1 00:24:28.581 --rc genhtml_legend=1 00:24:28.581 --rc geninfo_all_blocks=1 00:24:28.581 --rc geninfo_unexecuted_blocks=1 00:24:28.581 00:24:28.581 ' 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.581 --rc genhtml_branch_coverage=1 00:24:28.581 --rc genhtml_function_coverage=1 00:24:28.581 --rc genhtml_legend=1 00:24:28.581 --rc geninfo_all_blocks=1 00:24:28.581 --rc geninfo_unexecuted_blocks=1 00:24:28.581 00:24:28.581 ' 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.581 --rc genhtml_branch_coverage=1 00:24:28.581 --rc genhtml_function_coverage=1 00:24:28.581 --rc genhtml_legend=1 00:24:28.581 --rc geninfo_all_blocks=1 00:24:28.581 --rc geninfo_unexecuted_blocks=1 00:24:28.581 00:24:28.581 ' 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.581 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:28.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:28.582 10:52:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:33.845 10:53:01 
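The "[: : integer expression expected" line from nvmf/common.sh above comes from '[' '' -eq 1 ']': bash refuses to compare an empty string numerically, prints the diagnostic, and the test simply evaluates as false, so the run continues with only the noise. A tiny illustration of the failure and of a quieter guard; VAR is a hypothetical variable, not the one common.sh actually tests:

  VAR=''
  [ "$VAR" -eq 1 ] && echo enabled       # prints: [: : integer expression expected
  # Defaulting the value keeps the comparison numeric and silent.
  [ "${VAR:-0}" -eq 1 ] && echo enabled  # false, no diagnostic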
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.845 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:33.846 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:33.846 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:33.846 Found net devices under 0000:86:00.0: cvl_0_0 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:33.846 Found net devices under 0000:86:00.1: cvl_0_1 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:33.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:24:33.846 00:24:33.846 --- 10.0.0.2 ping statistics --- 00:24:33.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.846 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:33.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:24:33.846 00:24:33.846 --- 10.0.0.1 ping statistics --- 00:24:33.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.846 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.846 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.847 10:53:01 
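The nvmf_tcp_init sequence above moves one ice port (cvl_0_0) into the cvl_0_0_ns_spdk namespace for the target at 10.0.0.2, leaves the peer port (cvl_0_1) on the host at 10.0.0.1 for the initiator, opens TCP/4420 with an SPDK_NVMF-tagged iptables rule so teardown can strip exactly that rule, and then pings in both directions to prove the link. A condensed sketch of that bring-up, assuming the cvl_0_0/cvl_0_1 names this ice NIC exposes (other NICs name their ports differently):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # The comment lets cleanup grep the rule back out of iptables-save later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1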
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:33.847 10:53:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:36.374 Waiting for block devices as requested 00:24:36.374 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:36.631 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:36.631 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:36.631 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:36.631 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:36.889 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:36.889 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:36.889 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:36.889 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:37.147 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:37.147 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:37.147 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:37.406 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:37.406 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:37.406 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:37.406 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:37.664 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
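setup.sh reset above hands the drives back to the kernel (vfio-pci -> nvme for the SSD, vfio-pci -> ioatdma for the DMA channels), and the block loop that follows only accepts a namespace that is neither zoned nor already partitioned; the "No valid GPT data, bailing" message just below is what makes /dev/nvme0n1 eligible as the nvmet backing device. A stand-alone version of that suitability check, assuming blkid is available (the harness additionally consults its own spdk-gpt.py helper):

  usable_block_dev() {
      local dev=$1
      [ -b "/dev/$dev" ] || return 1
      # Zoned namespaces need zone-aware handling; skip them for a plain backend.
      if [ -e "/sys/block/$dev/queue/zoned" ] &&
         [ "$(cat "/sys/block/$dev/queue/zoned")" != none ]; then
          return 1
      fi
      # A detectable partition table means the disk is likely in use already.
      if [ -n "$(blkid -s PTTYPE -o value "/dev/$dev")" ]; then
          return 1
      fi
      return 0
  }
  usable_block_dev nvme0n1 && echo "/dev/nvme0n1 is free to export"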
00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:37.664 No valid GPT data, bailing 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:37.664 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:37.926 00:24:37.927 Discovery Log Number of Records 2, Generation counter 2 00:24:37.927 =====Discovery Log Entry 0====== 00:24:37.927 trtype: tcp 00:24:37.927 adrfam: ipv4 00:24:37.927 subtype: current discovery subsystem 00:24:37.927 treq: not specified, sq flow control disable supported 00:24:37.927 portid: 1 00:24:37.927 trsvcid: 4420 00:24:37.927 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:37.927 traddr: 10.0.0.1 00:24:37.927 eflags: none 00:24:37.927 sectype: none 00:24:37.927 =====Discovery Log Entry 1====== 00:24:37.927 trtype: tcp 00:24:37.927 adrfam: ipv4 00:24:37.927 subtype: nvme subsystem 00:24:37.927 treq: not specified, sq flow control disable 
supported 00:24:37.927 portid: 1 00:24:37.927 trsvcid: 4420 00:24:37.927 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:37.927 traddr: 10.0.0.1 00:24:37.927 eflags: none 00:24:37.927 sectype: none 00:24:37.927 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:37.927 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:37.927 ===================================================== 00:24:37.927 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:37.927 ===================================================== 00:24:37.927 Controller Capabilities/Features 00:24:37.927 ================================ 00:24:37.927 Vendor ID: 0000 00:24:37.927 Subsystem Vendor ID: 0000 00:24:37.927 Serial Number: 6dccd2d5f5c2266471bc 00:24:37.927 Model Number: Linux 00:24:37.927 Firmware Version: 6.8.9-20 00:24:37.927 Recommended Arb Burst: 0 00:24:37.927 IEEE OUI Identifier: 00 00 00 00:24:37.927 Multi-path I/O 00:24:37.927 May have multiple subsystem ports: No 00:24:37.927 May have multiple controllers: No 00:24:37.927 Associated with SR-IOV VF: No 00:24:37.927 Max Data Transfer Size: Unlimited 00:24:37.927 Max Number of Namespaces: 0 00:24:37.927 Max Number of I/O Queues: 1024 00:24:37.927 NVMe Specification Version (VS): 1.3 00:24:37.927 NVMe Specification Version (Identify): 1.3 00:24:37.927 Maximum Queue Entries: 1024 00:24:37.927 Contiguous Queues Required: No 00:24:37.927 Arbitration Mechanisms Supported 00:24:37.927 Weighted Round Robin: Not Supported 00:24:37.927 Vendor Specific: Not Supported 00:24:37.927 Reset Timeout: 7500 ms 00:24:37.927 Doorbell Stride: 4 bytes 00:24:37.927 NVM Subsystem Reset: Not Supported 00:24:37.927 Command Sets Supported 00:24:37.927 NVM Command Set: Supported 00:24:37.927 Boot Partition: Not Supported 00:24:37.927 Memory Page Size Minimum: 4096 bytes 00:24:37.927 Memory Page Size Maximum: 4096 bytes 00:24:37.927 Persistent Memory Region: Not Supported 00:24:37.927 Optional Asynchronous Events Supported 00:24:37.927 Namespace Attribute Notices: Not Supported 00:24:37.927 Firmware Activation Notices: Not Supported 00:24:37.927 ANA Change Notices: Not Supported 00:24:37.927 PLE Aggregate Log Change Notices: Not Supported 00:24:37.927 LBA Status Info Alert Notices: Not Supported 00:24:37.927 EGE Aggregate Log Change Notices: Not Supported 00:24:37.927 Normal NVM Subsystem Shutdown event: Not Supported 00:24:37.928 Zone Descriptor Change Notices: Not Supported 00:24:37.928 Discovery Log Change Notices: Supported 00:24:37.928 Controller Attributes 00:24:37.928 128-bit Host Identifier: Not Supported 00:24:37.928 Non-Operational Permissive Mode: Not Supported 00:24:37.928 NVM Sets: Not Supported 00:24:37.928 Read Recovery Levels: Not Supported 00:24:37.928 Endurance Groups: Not Supported 00:24:37.928 Predictable Latency Mode: Not Supported 00:24:37.928 Traffic Based Keep ALive: Not Supported 00:24:37.928 Namespace Granularity: Not Supported 00:24:37.928 SQ Associations: Not Supported 00:24:37.928 UUID List: Not Supported 00:24:37.928 Multi-Domain Subsystem: Not Supported 00:24:37.928 Fixed Capacity Management: Not Supported 00:24:37.928 Variable Capacity Management: Not Supported 00:24:37.928 Delete Endurance Group: Not Supported 00:24:37.928 Delete NVM Set: Not Supported 00:24:37.928 Extended LBA Formats Supported: Not Supported 00:24:37.928 Flexible Data Placement 
Supported: Not Supported 00:24:37.928 00:24:37.928 Controller Memory Buffer Support 00:24:37.928 ================================ 00:24:37.928 Supported: No 00:24:37.928 00:24:37.928 Persistent Memory Region Support 00:24:37.928 ================================ 00:24:37.928 Supported: No 00:24:37.928 00:24:37.928 Admin Command Set Attributes 00:24:37.928 ============================ 00:24:37.928 Security Send/Receive: Not Supported 00:24:37.928 Format NVM: Not Supported 00:24:37.928 Firmware Activate/Download: Not Supported 00:24:37.928 Namespace Management: Not Supported 00:24:37.928 Device Self-Test: Not Supported 00:24:37.928 Directives: Not Supported 00:24:37.928 NVMe-MI: Not Supported 00:24:37.928 Virtualization Management: Not Supported 00:24:37.928 Doorbell Buffer Config: Not Supported 00:24:37.928 Get LBA Status Capability: Not Supported 00:24:37.928 Command & Feature Lockdown Capability: Not Supported 00:24:37.928 Abort Command Limit: 1 00:24:37.928 Async Event Request Limit: 1 00:24:37.928 Number of Firmware Slots: N/A 00:24:37.928 Firmware Slot 1 Read-Only: N/A 00:24:37.928 Firmware Activation Without Reset: N/A 00:24:37.928 Multiple Update Detection Support: N/A 00:24:37.928 Firmware Update Granularity: No Information Provided 00:24:37.929 Per-Namespace SMART Log: No 00:24:37.929 Asymmetric Namespace Access Log Page: Not Supported 00:24:37.929 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:37.929 Command Effects Log Page: Not Supported 00:24:37.929 Get Log Page Extended Data: Supported 00:24:37.929 Telemetry Log Pages: Not Supported 00:24:37.929 Persistent Event Log Pages: Not Supported 00:24:37.929 Supported Log Pages Log Page: May Support 00:24:37.929 Commands Supported & Effects Log Page: Not Supported 00:24:37.929 Feature Identifiers & Effects Log Page:May Support 00:24:37.929 NVMe-MI Commands & Effects Log Page: May Support 00:24:37.929 Data Area 4 for Telemetry Log: Not Supported 00:24:37.929 Error Log Page Entries Supported: 1 00:24:37.929 Keep Alive: Not Supported 00:24:37.929 00:24:37.929 NVM Command Set Attributes 00:24:37.929 ========================== 00:24:37.929 Submission Queue Entry Size 00:24:37.929 Max: 1 00:24:37.929 Min: 1 00:24:37.929 Completion Queue Entry Size 00:24:37.929 Max: 1 00:24:37.929 Min: 1 00:24:37.929 Number of Namespaces: 0 00:24:37.929 Compare Command: Not Supported 00:24:37.929 Write Uncorrectable Command: Not Supported 00:24:37.929 Dataset Management Command: Not Supported 00:24:37.929 Write Zeroes Command: Not Supported 00:24:37.929 Set Features Save Field: Not Supported 00:24:37.929 Reservations: Not Supported 00:24:37.929 Timestamp: Not Supported 00:24:37.929 Copy: Not Supported 00:24:37.929 Volatile Write Cache: Not Present 00:24:37.929 Atomic Write Unit (Normal): 1 00:24:37.929 Atomic Write Unit (PFail): 1 00:24:37.929 Atomic Compare & Write Unit: 1 00:24:37.929 Fused Compare & Write: Not Supported 00:24:37.929 Scatter-Gather List 00:24:37.929 SGL Command Set: Supported 00:24:37.929 SGL Keyed: Not Supported 00:24:37.929 SGL Bit Bucket Descriptor: Not Supported 00:24:37.929 SGL Metadata Pointer: Not Supported 00:24:37.929 Oversized SGL: Not Supported 00:24:37.929 SGL Metadata Address: Not Supported 00:24:37.929 SGL Offset: Supported 00:24:37.929 Transport SGL Data Block: Not Supported 00:24:37.929 Replay Protected Memory Block: Not Supported 00:24:37.929 00:24:37.930 Firmware Slot Information 00:24:37.930 ========================= 00:24:37.930 Active slot: 0 00:24:37.930 00:24:37.930 00:24:37.930 Error Log 00:24:37.930 
========= 00:24:37.930 00:24:37.930 Active Namespaces 00:24:37.930 ================= 00:24:37.930 Discovery Log Page 00:24:37.930 ================== 00:24:37.930 Generation Counter: 2 00:24:37.930 Number of Records: 2 00:24:37.930 Record Format: 0 00:24:37.930 00:24:37.930 Discovery Log Entry 0 00:24:37.930 ---------------------- 00:24:37.930 Transport Type: 3 (TCP) 00:24:37.930 Address Family: 1 (IPv4) 00:24:37.930 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:37.930 Entry Flags: 00:24:37.930 Duplicate Returned Information: 0 00:24:37.930 Explicit Persistent Connection Support for Discovery: 0 00:24:37.930 Transport Requirements: 00:24:37.930 Secure Channel: Not Specified 00:24:37.930 Port ID: 1 (0x0001) 00:24:37.931 Controller ID: 65535 (0xffff) 00:24:37.931 Admin Max SQ Size: 32 00:24:37.931 Transport Service Identifier: 4420 00:24:37.931 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:37.931 Transport Address: 10.0.0.1 00:24:37.931 Discovery Log Entry 1 00:24:37.932 ---------------------- 00:24:37.932 Transport Type: 3 (TCP) 00:24:37.932 Address Family: 1 (IPv4) 00:24:37.932 Subsystem Type: 2 (NVM Subsystem) 00:24:37.932 Entry Flags: 00:24:37.932 Duplicate Returned Information: 0 00:24:37.932 Explicit Persistent Connection Support for Discovery: 0 00:24:37.932 Transport Requirements: 00:24:37.932 Secure Channel: Not Specified 00:24:37.932 Port ID: 1 (0x0001) 00:24:37.932 Controller ID: 65535 (0xffff) 00:24:37.932 Admin Max SQ Size: 32 00:24:37.932 Transport Service Identifier: 4420 00:24:37.932 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:37.932 Transport Address: 10.0.0.1 00:24:37.932 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:37.932 get_feature(0x01) failed 00:24:37.932 get_feature(0x02) failed 00:24:37.932 get_feature(0x04) failed 00:24:37.932 ===================================================== 00:24:37.932 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:37.932 ===================================================== 00:24:37.932 Controller Capabilities/Features 00:24:37.932 ================================ 00:24:37.932 Vendor ID: 0000 00:24:37.932 Subsystem Vendor ID: 0000 00:24:37.932 Serial Number: b5130d69af4a299a8619 00:24:37.932 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:37.932 Firmware Version: 6.8.9-20 00:24:37.932 Recommended Arb Burst: 6 00:24:37.932 IEEE OUI Identifier: 00 00 00 00:24:37.932 Multi-path I/O 00:24:37.932 May have multiple subsystem ports: Yes 00:24:37.932 May have multiple controllers: Yes 00:24:37.932 Associated with SR-IOV VF: No 00:24:37.933 Max Data Transfer Size: Unlimited 00:24:37.933 Max Number of Namespaces: 1024 00:24:37.933 Max Number of I/O Queues: 128 00:24:37.933 NVMe Specification Version (VS): 1.3 00:24:37.933 NVMe Specification Version (Identify): 1.3 00:24:37.933 Maximum Queue Entries: 1024 00:24:37.933 Contiguous Queues Required: No 00:24:37.933 Arbitration Mechanisms Supported 00:24:37.933 Weighted Round Robin: Not Supported 00:24:37.933 Vendor Specific: Not Supported 00:24:37.933 Reset Timeout: 7500 ms 00:24:37.933 Doorbell Stride: 4 bytes 00:24:37.933 NVM Subsystem Reset: Not Supported 00:24:37.933 Command Sets Supported 00:24:37.933 NVM Command Set: Supported 00:24:37.933 Boot Partition: Not Supported 00:24:37.933 
Memory Page Size Minimum: 4096 bytes 00:24:37.933 Memory Page Size Maximum: 4096 bytes 00:24:37.933 Persistent Memory Region: Not Supported 00:24:37.933 Optional Asynchronous Events Supported 00:24:37.933 Namespace Attribute Notices: Supported 00:24:37.933 Firmware Activation Notices: Not Supported 00:24:37.933 ANA Change Notices: Supported 00:24:37.933 PLE Aggregate Log Change Notices: Not Supported 00:24:37.933 LBA Status Info Alert Notices: Not Supported 00:24:37.933 EGE Aggregate Log Change Notices: Not Supported 00:24:37.933 Normal NVM Subsystem Shutdown event: Not Supported 00:24:37.933 Zone Descriptor Change Notices: Not Supported 00:24:37.933 Discovery Log Change Notices: Not Supported 00:24:37.933 Controller Attributes 00:24:37.933 128-bit Host Identifier: Supported 00:24:37.933 Non-Operational Permissive Mode: Not Supported 00:24:37.933 NVM Sets: Not Supported 00:24:37.933 Read Recovery Levels: Not Supported 00:24:37.933 Endurance Groups: Not Supported 00:24:37.933 Predictable Latency Mode: Not Supported 00:24:37.933 Traffic Based Keep ALive: Supported 00:24:37.933 Namespace Granularity: Not Supported 00:24:37.933 SQ Associations: Not Supported 00:24:37.933 UUID List: Not Supported 00:24:37.933 Multi-Domain Subsystem: Not Supported 00:24:37.933 Fixed Capacity Management: Not Supported 00:24:37.933 Variable Capacity Management: Not Supported 00:24:37.934 Delete Endurance Group: Not Supported 00:24:37.934 Delete NVM Set: Not Supported 00:24:37.934 Extended LBA Formats Supported: Not Supported 00:24:37.934 Flexible Data Placement Supported: Not Supported 00:24:37.934 00:24:37.934 Controller Memory Buffer Support 00:24:37.934 ================================ 00:24:37.934 Supported: No 00:24:37.934 00:24:37.934 Persistent Memory Region Support 00:24:37.934 ================================ 00:24:37.934 Supported: No 00:24:37.934 00:24:37.934 Admin Command Set Attributes 00:24:37.934 ============================ 00:24:37.934 Security Send/Receive: Not Supported 00:24:37.934 Format NVM: Not Supported 00:24:37.934 Firmware Activate/Download: Not Supported 00:24:37.934 Namespace Management: Not Supported 00:24:37.934 Device Self-Test: Not Supported 00:24:37.934 Directives: Not Supported 00:24:37.934 NVMe-MI: Not Supported 00:24:37.934 Virtualization Management: Not Supported 00:24:37.934 Doorbell Buffer Config: Not Supported 00:24:37.934 Get LBA Status Capability: Not Supported 00:24:37.934 Command & Feature Lockdown Capability: Not Supported 00:24:37.934 Abort Command Limit: 4 00:24:37.934 Async Event Request Limit: 4 00:24:37.934 Number of Firmware Slots: N/A 00:24:37.934 Firmware Slot 1 Read-Only: N/A 00:24:37.934 Firmware Activation Without Reset: N/A 00:24:37.934 Multiple Update Detection Support: N/A 00:24:37.934 Firmware Update Granularity: No Information Provided 00:24:37.934 Per-Namespace SMART Log: Yes 00:24:37.934 Asymmetric Namespace Access Log Page: Supported 00:24:37.934 ANA Transition Time : 10 sec 00:24:37.934 00:24:37.934 Asymmetric Namespace Access Capabilities 00:24:37.935 ANA Optimized State : Supported 00:24:37.935 ANA Non-Optimized State : Supported 00:24:37.935 ANA Inaccessible State : Supported 00:24:37.935 ANA Persistent Loss State : Supported 00:24:37.935 ANA Change State : Supported 00:24:37.935 ANAGRPID is not changed : No 00:24:37.935 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:37.935 00:24:37.935 ANA Group Identifier Maximum : 128 00:24:37.935 Number of ANA Group Identifiers : 128 00:24:37.935 Max Number of Allowed Namespaces : 1024 00:24:37.935 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:37.935 Command Effects Log Page: Supported 00:24:37.935 Get Log Page Extended Data: Supported 00:24:37.935 Telemetry Log Pages: Not Supported 00:24:37.935 Persistent Event Log Pages: Not Supported 00:24:37.935 Supported Log Pages Log Page: May Support 00:24:37.935 Commands Supported & Effects Log Page: Not Supported 00:24:37.935 Feature Identifiers & Effects Log Page:May Support 00:24:37.935 NVMe-MI Commands & Effects Log Page: May Support 00:24:37.935 Data Area 4 for Telemetry Log: Not Supported 00:24:37.935 Error Log Page Entries Supported: 128 00:24:37.935 Keep Alive: Supported 00:24:37.935 Keep Alive Granularity: 1000 ms 00:24:37.935 00:24:37.935 NVM Command Set Attributes 00:24:37.935 ========================== 00:24:37.935 Submission Queue Entry Size 00:24:37.935 Max: 64 00:24:37.935 Min: 64 00:24:37.935 Completion Queue Entry Size 00:24:37.935 Max: 16 00:24:37.935 Min: 16 00:24:37.935 Number of Namespaces: 1024 00:24:37.935 Compare Command: Not Supported 00:24:37.935 Write Uncorrectable Command: Not Supported 00:24:37.935 Dataset Management Command: Supported 00:24:37.935 Write Zeroes Command: Supported 00:24:37.935 Set Features Save Field: Not Supported 00:24:37.935 Reservations: Not Supported 00:24:37.937 Timestamp: Not Supported 00:24:37.937 Copy: Not Supported 00:24:37.937 Volatile Write Cache: Present 00:24:37.937 Atomic Write Unit (Normal): 1 00:24:37.937 Atomic Write Unit (PFail): 1 00:24:37.938 Atomic Compare & Write Unit: 1 00:24:37.938 Fused Compare & Write: Not Supported 00:24:37.938 Scatter-Gather List 00:24:37.938 SGL Command Set: Supported 00:24:37.938 SGL Keyed: Not Supported 00:24:37.938 SGL Bit Bucket Descriptor: Not Supported 00:24:37.938 SGL Metadata Pointer: Not Supported 00:24:37.938 Oversized SGL: Not Supported 00:24:37.938 SGL Metadata Address: Not Supported 00:24:37.938 SGL Offset: Supported 00:24:37.938 Transport SGL Data Block: Not Supported 00:24:37.938 Replay Protected Memory Block: Not Supported 00:24:37.938 00:24:37.938 Firmware Slot Information 00:24:37.938 ========================= 00:24:37.938 Active slot: 0 00:24:37.938 00:24:37.938 Asymmetric Namespace Access 00:24:37.938 =========================== 00:24:37.938 Change Count : 0 00:24:37.938 Number of ANA Group Descriptors : 1 00:24:37.938 ANA Group Descriptor : 0 00:24:37.938 ANA Group ID : 1 00:24:37.938 Number of NSID Values : 1 00:24:37.938 Change Count : 0 00:24:37.938 ANA State : 1 00:24:37.938 Namespace Identifier : 1 00:24:37.938 00:24:37.938 Commands Supported and Effects 00:24:37.938 ============================== 00:24:37.938 Admin Commands 00:24:37.938 -------------- 00:24:37.938 Get Log Page (02h): Supported 00:24:37.938 Identify (06h): Supported 00:24:37.938 Abort (08h): Supported 00:24:37.938 Set Features (09h): Supported 00:24:37.938 Get Features (0Ah): Supported 00:24:37.939 Asynchronous Event Request (0Ch): Supported 00:24:37.939 Keep Alive (18h): Supported 00:24:37.939 I/O Commands 00:24:37.939 ------------ 00:24:37.939 Flush (00h): Supported 00:24:37.939 Write (01h): Supported LBA-Change 00:24:37.939 Read (02h): Supported 00:24:37.939 Write Zeroes (08h): Supported LBA-Change 00:24:37.939 Dataset Management (09h): Supported 00:24:37.939 00:24:37.939 Error Log 00:24:37.939 ========= 00:24:37.939 Entry: 0 00:24:37.939 Error Count: 0x3 00:24:37.939 Submission Queue Id: 0x0 00:24:37.939 Command Id: 0x5 00:24:37.939 Phase Bit: 0 00:24:37.939 Status Code: 0x2 00:24:37.939 Status Code Type: 0x0 00:24:37.939 Do Not Retry: 1 00:24:37.939 
Error Location: 0x28 00:24:37.939 LBA: 0x0 00:24:37.939 Namespace: 0x0 00:24:37.939 Vendor Log Page: 0x0 00:24:37.939 ----------- 00:24:37.939 Entry: 1 00:24:37.939 Error Count: 0x2 00:24:37.939 Submission Queue Id: 0x0 00:24:37.939 Command Id: 0x5 00:24:37.939 Phase Bit: 0 00:24:37.939 Status Code: 0x2 00:24:37.939 Status Code Type: 0x0 00:24:37.939 Do Not Retry: 1 00:24:37.939 Error Location: 0x28 00:24:37.939 LBA: 0x0 00:24:37.939 Namespace: 0x0 00:24:37.939 Vendor Log Page: 0x0 00:24:37.939 ----------- 00:24:37.939 Entry: 2 00:24:37.939 Error Count: 0x1 00:24:37.939 Submission Queue Id: 0x0 00:24:37.939 Command Id: 0x4 00:24:37.939 Phase Bit: 0 00:24:37.939 Status Code: 0x2 00:24:37.939 Status Code Type: 0x0 00:24:37.940 Do Not Retry: 1 00:24:37.940 Error Location: 0x28 00:24:37.940 LBA: 0x0 00:24:37.940 Namespace: 0x0 00:24:37.940 Vendor Log Page: 0x0 00:24:37.940 00:24:37.940 Number of Queues 00:24:37.940 ================ 00:24:37.940 Number of I/O Submission Queues: 128 00:24:37.940 Number of I/O Completion Queues: 128 00:24:37.940 00:24:37.940 ZNS Specific Controller Data 00:24:37.940 ============================ 00:24:37.940 Zone Append Size Limit: 0 00:24:37.940 00:24:37.940 00:24:37.940 Active Namespaces 00:24:37.940 ================= 00:24:37.940 get_feature(0x05) failed 00:24:37.940 Namespace ID:1 00:24:37.940 Command Set Identifier: NVM (00h) 00:24:37.940 Deallocate: Supported 00:24:37.940 Deallocated/Unwritten Error: Not Supported 00:24:37.940 Deallocated Read Value: Unknown 00:24:37.940 Deallocate in Write Zeroes: Not Supported 00:24:37.940 Deallocated Guard Field: 0xFFFF 00:24:37.940 Flush: Supported 00:24:37.940 Reservation: Not Supported 00:24:37.940 Namespace Sharing Capabilities: Multiple Controllers 00:24:37.940 Size (in LBAs): 1953525168 (931GiB) 00:24:37.940 Capacity (in LBAs): 1953525168 (931GiB) 00:24:37.940 Utilization (in LBAs): 1953525168 (931GiB) 00:24:37.940 UUID: 41bf93a3-e85e-4eda-97e1-fe97ba4412b0 00:24:37.940 Thin Provisioning: Not Supported 00:24:37.940 Per-NS Atomic Units: Yes 00:24:37.940 Atomic Boundary Size (Normal): 0 00:24:37.940 Atomic Boundary Size (PFail): 0 00:24:37.940 Atomic Boundary Offset: 0 00:24:37.940 NGUID/EUI64 Never Reused: No 00:24:37.940 ANA group ID: 1 00:24:37.940 Namespace Write Protected: No 00:24:37.940 Number of LBA Formats: 1 00:24:37.940 Current LBA Format: LBA Format #00 00:24:37.940 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:37.940 00:24:37.940 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:37.940 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:37.940 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:37.940 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.941 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:37.941 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.941 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.941 rmmod nvme_tcp 00:24:37.941 rmmod nvme_fabrics 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:38.209 10:53:05 
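Both identify listings above (the discovery controller and nqn.2016-06.io.spdk:testnqn with its 931GiB namespace) were served by the in-kernel nvmet target that configure_kernel_target assembled earlier with nothing but mkdir, echo and ln -s under configfs. A compact sketch of that sequence for the same NQN and the 10.0.0.1:4420 listener; the attribute file names used here (attr_model, attr_allow_any_host, device_path, enable, addr_*) follow the usual nvmet configfs layout rather than being a transcript of common.sh, so treat them as assumptions:

  modprobe nvmet-tcp    # pulls in nvmet as a dependency
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"
  # Exporting the subsystem on the port is just a symlink; clean_kernel_target removes it again.
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"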
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.209 10:53:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.110 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.110 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:40.110 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:40.110 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:40.110 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:40.110 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:40.110 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:40.110 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:40.110 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:40.110 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:40.110 10:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:42.641 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:42.641 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:43.577 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:43.577 00:24:43.577 real 0m15.258s 00:24:43.577 user 0m3.677s 00:24:43.577 sys 0m7.896s 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:43.577 ************************************ 00:24:43.577 END TEST nvmf_identify_kernel_target 00:24:43.577 ************************************ 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.577 ************************************ 00:24:43.577 START TEST nvmf_auth_host 00:24:43.577 ************************************ 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:43.577 * Looking for test storage... 
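The nvmftestfini/clean_kernel_target trace above tears the kernel nvmet configuration down in the reverse order it was built before setup.sh rebinds the devices. Condensed, and assuming the bare "echo 0" in the trace targets the namespace's enable attribute (redirections are not captured by xtrace):

  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable   # assumed redirect target
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn                 # unlink port -> subsystem
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet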
00:24:43.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:43.577 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:43.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.578 --rc genhtml_branch_coverage=1 00:24:43.578 --rc genhtml_function_coverage=1 00:24:43.578 --rc genhtml_legend=1 00:24:43.578 --rc geninfo_all_blocks=1 00:24:43.578 --rc geninfo_unexecuted_blocks=1 00:24:43.578 00:24:43.578 ' 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:43.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.578 --rc genhtml_branch_coverage=1 00:24:43.578 --rc genhtml_function_coverage=1 00:24:43.578 --rc genhtml_legend=1 00:24:43.578 --rc geninfo_all_blocks=1 00:24:43.578 --rc geninfo_unexecuted_blocks=1 00:24:43.578 00:24:43.578 ' 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:43.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.578 --rc genhtml_branch_coverage=1 00:24:43.578 --rc genhtml_function_coverage=1 00:24:43.578 --rc genhtml_legend=1 00:24:43.578 --rc geninfo_all_blocks=1 00:24:43.578 --rc geninfo_unexecuted_blocks=1 00:24:43.578 00:24:43.578 ' 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:43.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:43.578 --rc genhtml_branch_coverage=1 00:24:43.578 --rc genhtml_function_coverage=1 00:24:43.578 --rc genhtml_legend=1 00:24:43.578 --rc geninfo_all_blocks=1 00:24:43.578 --rc geninfo_unexecuted_blocks=1 00:24:43.578 00:24:43.578 ' 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.578 10:53:11 
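The scripts/common.sh trace in this stretch is a component-wise version comparison: "lt 1.15 2" splits both versions on '.', '-' or ':' and compares the fields numerically, which is why lcov 1.15 ends up on the pre-2.0 option set. A minimal stand-alone sketch of the same idea (not the literal repository code, and assuming purely numeric fields):

  version_lt() {
      local IFS='.-:' i
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}     # missing fields compare as 0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1                                # equal -> not less-than
  }
  version_lt 1.15 2 && echo "lcov older than 2.0: use the pre-2.0 option names"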
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.578 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.836 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:43.836 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:43.836 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.836 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.836 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.836 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.836 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.836 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:43.836 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.836 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.836 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:43.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:43.837 10:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:49.105 10:53:16 
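The "[: : integer expression expected" message from common.sh a few lines back is harmless in this run: build_nvmf_app_args tested an empty variable with -eq 1, the test fails, and the script carries on. A defensive pattern for that kind of check (hypothetical, not the repository's actual fix) is to default the operand before comparing:

  flag=""
  [ "${flag:-0}" -eq 1 ] && echo enabled     # empty/unset defaults to 0, so -eq never sees ""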
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:49.105 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:49.105 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.105 
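Both matches above are PCI device ID 0x159b, i.e. the Intel E810 family selected by SPDK_TEST_NVMF_NICS=e810 in this job's configuration. A quick manual cross-check on the same node (hypothetical, not part of the harness):

  lspci -d 8086:159b     # should list 0000:86:00.0 and 0000:86:00.1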
10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:49.105 Found net devices under 0000:86:00.0: cvl_0_0 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:49.105 Found net devices under 0000:86:00.1: cvl_0_1 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.105 10:53:16 
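The "Found net devices under ..." lines come from a plain sysfs glob: every netdev a PCI NIC exposes appears under that device's net/ directory, so the script only has to expand it. The same lookup by hand:

  for pci in 0000:86:00.0 0000:86:00.1; do
      ls /sys/bus/pci/devices/$pci/net/    # prints cvl_0_0 and cvl_0_1 on this node
  done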
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.105 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:49.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:24:49.106 00:24:49.106 --- 10.0.0.2 ping statistics --- 00:24:49.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.106 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:49.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:24:49.106 00:24:49.106 --- 10.0.0.1 ping statistics --- 00:24:49.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.106 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2813190 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2813190 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2813190 ']' 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
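nvmf_tcp_init, traced above, lets a single host play both roles: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420 toward the initiator interface, and a ping in each direction confirms the link. The same setup, condensed:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1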
00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a13272bf63dd77f45cf685fd929a4486 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yn5 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a13272bf63dd77f45cf685fd929a4486 0 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a13272bf63dd77f45cf685fd929a4486 0 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a13272bf63dd77f45cf685fd929a4486 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:49.106 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yn5 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yn5 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.yn5 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:49.365 10:53:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=41cfa41f28f6312240410504102458dbc817950a661577ad63c42bdd6e0b5a4b 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.0aa 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 41cfa41f28f6312240410504102458dbc817950a661577ad63c42bdd6e0b5a4b 3 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 41cfa41f28f6312240410504102458dbc817950a661577ad63c42bdd6e0b5a4b 3 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=41cfa41f28f6312240410504102458dbc817950a661577ad63c42bdd6e0b5a4b 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.0aa 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.0aa 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.0aa 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b0e355d639f4309f5f72964178a77af45bf299f6477069d4 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.n60 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b0e355d639f4309f5f72964178a77af45bf299f6477069d4 0 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b0e355d639f4309f5f72964178a77af45bf299f6477069d4 0 
00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b0e355d639f4309f5f72964178a77af45bf299f6477069d4 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.n60 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.n60 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.n60 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a63855cec5f1b1603a5f7fee2045a4086cec4e89aab58b5b 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.lQE 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a63855cec5f1b1603a5f7fee2045a4086cec4e89aab58b5b 2 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a63855cec5f1b1603a5f7fee2045a4086cec4e89aab58b5b 2 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a63855cec5f1b1603a5f7fee2045a4086cec4e89aab58b5b 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.lQE 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.lQE 00:24:49.365 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.lQE 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:49.366 10:53:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b0e57135d7911b8788f2d4a82cf99f6d 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.GXc 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b0e57135d7911b8788f2d4a82cf99f6d 1 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b0e57135d7911b8788f2d4a82cf99f6d 1 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b0e57135d7911b8788f2d4a82cf99f6d 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:49.366 10:53:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.GXc 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.GXc 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.GXc 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0b3dc8e4b09a3de0a3b2641d907e9e3b 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.psK 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0b3dc8e4b09a3de0a3b2641d907e9e3b 1 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0b3dc8e4b09a3de0a3b2641d907e9e3b 1 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=0b3dc8e4b09a3de0a3b2641d907e9e3b 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:49.366 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.psK 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.psK 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.psK 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=33be3a65800a63d2c289171b4b981ac4f82df6b6b81afd66 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.918 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 33be3a65800a63d2c289171b4b981ac4f82df6b6b81afd66 2 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 33be3a65800a63d2c289171b4b981ac4f82df6b6b81afd66 2 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=33be3a65800a63d2c289171b4b981ac4f82df6b6b81afd66 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.918 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.918 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.918 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:49.625 10:53:17 
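Every gen_dhchap_key call traced through this stretch follows the same pattern: pull random bytes from /dev/urandom with xxd (16 bytes for the 32-character keys, 32 bytes for the 64-character ones), create a spdk.key-* temp file, wrap the hex secret in the DHHC-1 interchange format with an inline python snippet (whose body xtrace does not capture), tighten permissions, and record the file path in keys[]/ckeys[]. A condensed sketch of the visible steps, with the formatting left as a placeholder:

  key=$(xxd -p -c0 -l 16 /dev/urandom)     # 16 random bytes -> 32 hex characters
  file=$(mktemp -t spdk.key-null.XXX)
  printf '%s\n' "$key" > "$file"           # placeholder: the real helper stores the key
                                           # wrapped as a DHHC-1 string, not the raw hex
  chmod 0600 "$file"                       # keep the DH-HMAC-CHAP secret private
  keys[0]=$file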
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=22ecde60fa67f1d134c327a4f5302f0f 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.sDr 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 22ecde60fa67f1d134c327a4f5302f0f 0 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 22ecde60fa67f1d134c327a4f5302f0f 0 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=22ecde60fa67f1d134c327a4f5302f0f 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.sDr 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.sDr 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.sDr 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=78014da44e547ab2468330272017de78f1d125334bb00a2195c8dd13056bce74 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gZi 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 78014da44e547ab2468330272017de78f1d125334bb00a2195c8dd13056bce74 3 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 78014da44e547ab2468330272017de78f1d125334bb00a2195c8dd13056bce74 3 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=78014da44e547ab2468330272017de78f1d125334bb00a2195c8dd13056bce74 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gZi 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gZi 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.gZi 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2813190 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2813190 ']' 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:49.625 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yn5 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.0aa ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0aa 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.n60 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.lQE ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.lQE 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.GXc 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.psK ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.psK 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.918 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.sDr ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.sDr 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.gZi 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.884 10:53:17 
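The gen_dhchap_key traces above draw a random hex secret from /dev/urandom with xxd, wrap it in the DHHC-1 transport-secret representation via the inline python helper, store it in a mode-0600 temp file, and keyring_file_add_key then registers each file with the running SPDK application as key0..key4 / ckey0..ckey3. The exact byte handling inside format_dhchap_key is not reproduced here; the sketch below only illustrates the generic DHHC-1 wrapper (base64 of the secret with its CRC-32 appended) and the registration RPC, with illustrative file and key names.

# Minimal standalone sketch (illustrative names; not the in-tree format_dhchap_key):
# wrap a random 32-byte secret in a DHHC-1 representation and register it.
key_hex=$(xxd -p -c0 -l 32 /dev/urandom)
keyfile=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key_hex" > "$keyfile" <<'PY'
import base64, binascii, sys
raw = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(raw).to_bytes(4, "little")                    # CRC-32 appended per the DHHC-1 secret format
print("DHHC-1:01:" + base64.b64encode(raw + crc).decode() + ":")   # 01 = hmac id for SHA-256
PY
chmod 0600 "$keyfile"
# Register the file-backed key with the running app, as rpc_cmd does above:
scripts/rpc.py keyring_file_add_key key0 "$keyfile"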
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.884 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:49.885 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:49.885 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:49.885 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:49.885 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:49.885 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:49.885 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:49.885 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:49.885 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:49.885 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:49.885 10:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:53.166 Waiting for block devices as requested 00:24:53.166 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:53.166 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:53.166 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:53.166 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:53.166 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:53.166 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:53.166 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:53.166 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:53.166 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:53.424 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:53.424 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:53.424 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:53.424 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:53.682 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:53.682 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:53.682 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:53.682 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:54.248 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:54.248 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:54.248 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:54.248 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:54.248 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:54.248 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:54.248 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:54.248 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:54.248 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:54.248 No valid GPT data, bailing 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:54.506 10:53:21 
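configure_kernel_target (nvmf/common.sh@660 onward) stands up a kernel nvmet soft target over configfs: the mkdir calls above create the subsystem, its namespace 1 and port 1, and the echo / ln -s entries that follow fill in their attributes and expose the subsystem on the port. xtrace does not print redirection targets, so the attribute files never appear in the trace; the sketch below maps those echoes onto the stock nvmet configfs attribute names, which is an assumption about the script rather than a quote from it.

# Hedged reconstruction of the configfs writes; attribute paths are assumed from
# the standard /sys/kernel/config/nvmet layout, values are taken from the trace.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"
echo 1 > "$subsys/attr_allow_any_host"                   # presumably turned back off (echo 0) once the host NQN is allow-listed below
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # the local disk found by the block scan above
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                      # listener goes live; the nvme discover below sees it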
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:54.506 10:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:54.506 00:24:54.506 Discovery Log Number of Records 2, Generation counter 2 00:24:54.506 =====Discovery Log Entry 0====== 00:24:54.506 trtype: tcp 00:24:54.506 adrfam: ipv4 00:24:54.506 subtype: current discovery subsystem 00:24:54.506 treq: not specified, sq flow control disable supported 00:24:54.506 portid: 1 00:24:54.506 trsvcid: 4420 00:24:54.506 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:54.506 traddr: 10.0.0.1 00:24:54.506 eflags: none 00:24:54.506 sectype: none 00:24:54.506 =====Discovery Log Entry 1====== 00:24:54.506 trtype: tcp 00:24:54.506 adrfam: ipv4 00:24:54.506 subtype: nvme subsystem 00:24:54.506 treq: not specified, sq flow control disable supported 00:24:54.506 portid: 1 00:24:54.506 trsvcid: 4420 00:24:54.506 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:54.506 traddr: 10.0.0.1 00:24:54.506 eflags: none 00:24:54.506 sectype: none 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.506 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.765 nvme0n1 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
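This is the setup repeated for every combination: nvmet_auth_set_key programs the DH-HMAC-CHAP parameters for the allowed host on the kernel target (the 'hmac(sha256)', ffdhe2048 and DHHC-1 echoes above), while connect_authenticate drives the SPDK initiator through rpc_cmd, the harness wrapper around scripts/rpc.py. Reproduced as standalone commands below; the target-side attribute names are assumed from the stock kernel nvmet host configfs (xtrace hides the redirect targets) and the secrets are truncated rather than copied.

# Initiator side (same RPCs as the rpc_cmd entries above):
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Target side (assumed attribute names; values are the ones echoed in the trace):
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:...' > "$host/dhchap_key"         # host secret, truncated here
echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"    # controller secret, truncated here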
00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.765 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.023 nvme0n1 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.023 10:53:22 
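Each connect_authenticate round ends with the same verification and teardown seen above: bdev_nvme_get_controllers is filtered through jq to confirm the authenticated attach produced a controller named nvme0, and the controller is detached before the next key is tried. As a standalone sketch of that check:

# Confirm the DH-HMAC-CHAP attach produced the expected controller, then detach.
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || { echo "authenticated attach failed" >&2; exit 1; }
scripts/rpc.py bdev_nvme_detach_controller nvme0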
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.023 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.024 nvme0n1 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.024 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 nvme0n1 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.282 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.541 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.542 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.542 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.542 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.542 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:55.542 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.542 10:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.542 nvme0n1 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.542 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.801 nvme0n1 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.801 10:53:23 
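Key 4 is the only entry generated without a companion controller key (ckeys[4] is empty), which is why the trace above evaluates [[ -z '' ]] and the attach_controller call carries only --dhchap-key key4: this round exercises unidirectional authentication. The script gets that behaviour from the ${ckeys[keyid]:+...} expansion captured in the ckey=( ... ) lines; a small self-contained sketch of the same pattern, with illustrative values:

# ":+" expands to the option pair only when the array element is non-empty,
# so a controller key is requested only for key indexes that have one.
ckeys=(ckey-present "")                      # illustrative values
for keyid in 0 1; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo rpc.py bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"
done
# keyid 0 prints the --dhchap-ctrlr-key pair; keyid 1 omits it, as with key4 above.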
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.801 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.059 nvme0n1 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.059 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.060 
10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.060 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.318 nvme0n1 00:24:56.318 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.318 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.318 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.319 10:53:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.319 10:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.578 nvme0n1 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.578 10:53:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.578 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.837 nvme0n1 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:56.837 10:53:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:56.837 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.838 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.097 nvme0n1 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.097 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.357 nvme0n1 00:24:57.357 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.357 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.357 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.357 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.357 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.357 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.357 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.357 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.357 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.357 10:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:24:57.357 10:53:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.357 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.616 nvme0n1 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.616 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:57.875 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:57.876 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.876 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.876 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:57.876 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.876 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:57.876 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:57.876 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:57.876 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:57.876 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.876 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.135 nvme0n1 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.135 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.136 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.395 nvme0n1 00:24:58.395 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.395 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.395 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.395 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.395 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.395 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.395 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.395 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.395 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.395 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.395 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.395 10:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.395 10:53:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.395 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.396 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.396 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.396 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.396 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.396 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:58.396 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.396 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.396 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.396 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.396 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.655 nvme0n1 00:24:58.655 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.655 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.655 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.655 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.655 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.655 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.655 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.655 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.655 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.655 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.915 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.175 nvme0n1 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 
00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.175 10:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.744 nvme0n1 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.744 10:53:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.744 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.745 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.004 nvme0n1 00:25:00.004 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.004 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.004 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.004 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.004 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.005 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.264 10:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.524 nvme0n1 00:25:00.524 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.524 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.524 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.524 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
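[annotation] What the repeated nvmet_auth_set_key traces above amount to: for the chosen digest, DH group and key index, the helper pushes the matching DH-HMAC-CHAP material into the kernel nvmet target before each connect attempt. A minimal sketch only, assuming the standard nvmet configfs attributes; the host directory path and attribute names are assumptions, not taken from this trace, and keys/ckeys are the arrays defined earlier in host/auth.sh.

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # assumed location of the allowed-host entry created earlier in the test
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. 'hmac(sha256)'
    echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe6144
    echo "$key"          > "$host/dhchap_key"      # DHHC-1:...: host secret
    if [[ -n $ckey ]]; then
        echo "$ckey" > "$host/dhchap_ctrl_key"     # optional controller secret
    fi
}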
common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.524 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.524 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.525 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.093 nvme0n1 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.093 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.094 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:01.094 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:01.094 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:01.094 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.094 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.094 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:01.094 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.094 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:01.094 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:01.094 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:01.094 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:01.094 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.094 10:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
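[annotation] The connect/verify/teardown block that every iteration traces (host/auth.sh@55-65 above) condenses to the sketch below. rpc_cmd is the test suite's wrapper around scripts/rpc.py talking to the running SPDK target; the wrapper itself is assumed here, while the RPC names and arguments are taken from the trace, with 10.0.0.1 being the address get_main_ns_ip resolved in this run.

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # restrict the initiator to the digest/DH group under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # attach over TCP with DH-HMAC-CHAP enabled for this key pair
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
    # authentication succeeded if the controller shows up under its expected name
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}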
common/autotest_common.sh@10 -- # set +x 00:25:01.666 nvme0n1 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.666 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.233 nvme0n1 00:25:02.233 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.233 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.233 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.233 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.233 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.233 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.233 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.233 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.233 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.233 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:02.492 
10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:02.492 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.493 10:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.059 nvme0n1 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.059 
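[annotation] The get_main_ns_ip helper traced at nvmf/common.sh@769-783 just maps the transport in use to the shell variable that holds the target address and prints its value. Roughly the following; the TEST_TRANSPORT variable name and the indirect expansion are assumptions about how "tcp" and 10.0.0.1 enter the trace.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    ip=${ip_candidates[$TEST_TRANSPORT]}  # 'tcp' here, so NVMF_INITIATOR_IP
    [[ -n $ip && -n ${!ip} ]] || return 1
    echo "${!ip}"                         # 10.0.0.1 in this run
}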
10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.059 10:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.626 nvme0n1 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.626 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.193 nvme0n1 00:25:04.193 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.193 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.193 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.193 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.193 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.194 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.452 10:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.452 nvme0n1 00:25:04.452 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.452 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.452 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.452 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.452 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
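[annotation] At this point the trace has switched from hmac(sha256) to hmac(sha384) and restarted at ffdhe2048, which is the outer sweep at host/auth.sh@100-104: every digest is paired with every DH group and every configured key index. In outline; the full contents of the digests/dhgroups arrays are an assumption, since only sha256/sha384 and the FFDHE groups visible above appear in this excerpt.

digests=(sha256 sha384 sha512)                                    # assumed full list
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192) # assumed full list
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # host/auth.sh@103
            connect_authenticate "$digest" "$dhgroup" "$keyid" # host/auth.sh@104
        done
    done
done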
common/autotest_common.sh@10 -- # set +x 00:25:04.452 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.452 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.452 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.452 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.452 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.710 nvme0n1 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:04.710 10:53:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.710 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.711 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.969 nvme0n1 00:25:04.969 10:53:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.969 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.276 nvme0n1 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.276 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.600 nvme0n1 00:25:05.600 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.600 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.600 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.600 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.600 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.600 10:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.600 nvme0n1 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.600 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.871 
10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:05.871 10:53:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.871 nvme0n1 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.871 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.872 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.133 nvme0n1 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.133 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.392 nvme0n1 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.392 10:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.392 
10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:06.392 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.650 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.651 nvme0n1 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.651 
10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.651 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.909 nvme0n1 00:25:06.909 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.909 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.909 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.909 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.909 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.909 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.168 10:53:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.168 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.426 nvme0n1 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.426 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.427 10:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.686 nvme0n1 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.686 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.944 nvme0n1 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:07.944 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.945 10:53:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.945 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.203 nvme0n1 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.203 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.462 10:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.721 nvme0n1 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.721 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.979 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.238 nvme0n1 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.238 10:53:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.238 10:53:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.238 10:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.804 nvme0n1 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.804 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:09.805 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.805 
10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.063 nvme0n1 00:25:10.063 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.063 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.063 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.063 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.063 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.063 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.321 10:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.579 nvme0n1 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.579 10:53:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.579 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:10.580 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:10.580 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:10.580 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.580 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.580 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.513 nvme0n1 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.513 10:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.080 nvme0n1 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.080 
10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.080 10:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.646 nvme0n1 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.646 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.213 nvme0n1 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.213 10:53:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:13.213 10:53:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:13.213 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:13.471 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.472 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.472 10:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.038 nvme0n1 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:14.038 nvme0n1 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.038 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.297 nvme0n1 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:14.297 
10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.297 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.556 10:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.556 nvme0n1 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.556 
10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.556 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.815 nvme0n1 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.815 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.073 nvme0n1 00:25:15.073 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.073 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.073 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.074 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.332 nvme0n1 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.332 
10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.332 10:53:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.332 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.333 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.333 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.333 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.333 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.333 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.333 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.333 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.333 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.333 10:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.591 nvme0n1 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:15.591 10:53:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.591 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.850 nvme0n1 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.850 10:53:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.850 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.108 nvme0n1 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:16.108 
10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.108 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.109 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
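The ffdhe3072 pass above repeats the same per-keyid sequence as the ffdhe2048 pass before it: program the target-side DH-HMAC-CHAP secret with nvmet_auth_set_key, restrict the host to the digest/dhgroup under test with bdev_nvme_set_options, attach the controller, confirm it appears, then detach. A condensed sketch of one such iteration follows; it assumes the rpc_cmd wrapper and the key0/ckey0 key entries set up earlier in host/auth.sh (not shown in this excerpt), so it illustrates the loop body rather than being a standalone script.

    digest=sha512 dhgroup=ffdhe3072 keyid=0

    # Target side: set the expected secret (and controller secret) for this keyid.
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

    # Host side: allow only the digest/dhgroup combination under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the matching key pair, verify the controller shows up, then clean up.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

As in the attach calls logged above, --dhchap-ctrlr-key is passed only when a controller key exists for the keyid; keyid 4 has no ckey, so that argument is omitted for it.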
00:25:16.367 nvme0n1 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:16.367 10:53:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.367 10:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.625 nvme0n1 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.625 10:53:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.625 10:53:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.625 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.883 nvme0n1 00:25:16.883 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.883 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.883 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.883 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.883 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.883 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.883 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.884 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.884 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.884 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.142 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.401 nvme0n1 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.401 10:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.660 nvme0n1 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.660 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.918 nvme0n1 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.918 10:53:45 
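At this point the trace moves from ffdhe4096 to ffdhe6144 and restarts the key indices at 0: host/auth.sh@101 iterates the DH groups and host/auth.sh@102 the key indices, programming the target and re-authenticating for every combination with the sha512 digest. A compressed sketch of those driving loops (the array contents below are illustrative; only the loop structure and the two helpers are taken from the trace):

# Outer/inner loops as traced at host/auth.sh@101-@104.  dhgroups and keys are
# populated earlier in the script; the literals below are only an example.
dhgroups=(ffdhe4096 ffdhe6144 ffdhe8192)
declare -a keys ckeys        # DHHC-1 secrets registered before the loop

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # program target side
        connect_authenticate sha512 "$dhgroup" "$keyid"  # attach, verify, detach
    done
done
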
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.918 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.484 nvme0n1 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.484 10:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.484 10:53:46 
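The repeated nvmf/common.sh@769-@783 block is the get_main_ns_ip helper resolving which address the host should dial: it maps the transport in use to the name of the environment variable that carries the address, then dereferences it (10.0.0.1 in this run). A rough reconstruction from the traced lines only; the transport variable name and the handling of the untraced branches are assumptions:

# Sketch of get_main_ns_ip as suggested by the nvmf/common.sh@769-@783 trace.
# TEST_TRANSPORT and the NVMF_* variables are assumed to come from the test
# environment; script lines not exercised in this run are not reproduced.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1   # indirect expansion gives the actual address
    echo "${!ip}"                 # -> 10.0.0.1 in this run
}
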
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.484 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.742 nvme0n1 00:25:18.742 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.742 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.742 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.742 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.742 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.742 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.000 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.000 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.000 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.000 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.000 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.000 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.000 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.001 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.259 nvme0n1 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:19.259 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.260 10:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.826 nvme0n1 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.826 10:53:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.826 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.393 nvme0n1 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTEzMjcyYmY2M2RkNzdmNDVjZjY4NWZkOTI5YTQ0ODagknD8: 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: ]] 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDFjZmE0MWYyOGY2MzEyMjQwNDEwNTA0MTAyNDU4ZGJjODE3OTUwYTY2MTU3N2FkNjNjNDJiZGQ2ZTBiNWE0Yhx77p0=: 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.393 10:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.959 nvme0n1 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:20.959 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:20.960 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:20.960 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.960 10:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.524 nvme0n1 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.524 10:53:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.524 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.525 10:53:49 
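The target-side half, nvmet_auth_set_key (host/auth.sh@42-@51), writes the digest as 'hmac(sha512)', the DH group name, the host key and, when present, the controller key. set -x does not record redirections, so the trace does not show where those echoes land; the sketch below assumes they go to the Linux nvmet configfs attributes for the allowed host, which take values in these formats:

# Hypothetical destination of the echoes traced at host/auth.sh@48-@51; the
# configfs paths are an assumption and do not appear in the xtrace output.
hostnqn=nqn.2024-02.io.spdk:host0
host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
key=${keys[keyid]} ckey=${ckeys[keyid]}

echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"        # @48: digest
echo ffdhe8192      > "$host_cfg/dhchap_dhgroup"     # @49: DH group
echo "$key"         > "$host_cfg/dhchap_key"         # @50: host secret
[[ -z $ckey ]] || echo "$ckey" > "$host_cfg/dhchap_ctrl_key"   # @51: optional ctrlr secret
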
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.525 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.091 nvme0n1 00:25:22.091 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.091 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.091 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.091 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.091 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.091 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzNiZTNhNjU4MDBhNjNkMmMyODkxNzFiNGI5ODFhYzRmODJkZjZiNmI4MWFmZDY2bWIFig==: 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: ]] 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjJlY2RlNjBmYTY3ZjFkMTM0YzMyN2E0ZjUzMDJmMGb/CQe2: 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.349 10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.349 
10:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.913 nvme0n1 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.913 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwMTRkYTQ0ZTU0N2FiMjQ2ODMzMDI3MjAxN2RlNzhmMWQxMjUzMzRiYjAwYTIxOTVjOGRkMTMwNTZiY2U3NC52WIM=: 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.914 10:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.479 nvme0n1 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:25:23.479 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.480 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.738 request: 00:25:23.738 { 00:25:23.738 "name": "nvme0", 00:25:23.738 "trtype": "tcp", 00:25:23.738 "traddr": "10.0.0.1", 00:25:23.738 "adrfam": "ipv4", 00:25:23.738 "trsvcid": "4420", 00:25:23.738 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:23.738 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:23.738 "prchk_reftag": false, 00:25:23.738 "prchk_guard": false, 00:25:23.738 "hdgst": false, 00:25:23.738 "ddgst": false, 00:25:23.738 "allow_unrecognized_csi": false, 00:25:23.738 "method": "bdev_nvme_attach_controller", 00:25:23.738 "req_id": 1 00:25:23.738 } 00:25:23.738 Got JSON-RPC error response 00:25:23.738 response: 00:25:23.738 { 00:25:23.738 "code": -5, 00:25:23.738 "message": "Input/output error" 00:25:23.738 } 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.738 request: 00:25:23.738 { 00:25:23.738 "name": "nvme0", 00:25:23.738 "trtype": "tcp", 00:25:23.738 "traddr": "10.0.0.1", 00:25:23.738 "adrfam": "ipv4", 00:25:23.738 "trsvcid": "4420", 00:25:23.738 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:23.738 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:23.738 "prchk_reftag": false, 00:25:23.738 "prchk_guard": false, 00:25:23.738 "hdgst": false, 00:25:23.738 "ddgst": false, 00:25:23.738 "dhchap_key": "key2", 00:25:23.738 "allow_unrecognized_csi": false, 00:25:23.738 "method": "bdev_nvme_attach_controller", 00:25:23.738 "req_id": 1 00:25:23.738 } 00:25:23.738 Got JSON-RPC error response 00:25:23.738 response: 00:25:23.738 { 00:25:23.738 "code": -5, 00:25:23.738 "message": "Input/output error" 00:25:23.738 } 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:23.738 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.739 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.997 request: 00:25:23.997 { 00:25:23.997 "name": "nvme0", 00:25:23.997 "trtype": "tcp", 00:25:23.997 "traddr": "10.0.0.1", 00:25:23.997 "adrfam": "ipv4", 00:25:23.997 "trsvcid": "4420", 00:25:23.997 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:23.997 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:23.997 "prchk_reftag": false, 00:25:23.997 "prchk_guard": false, 00:25:23.997 "hdgst": false, 00:25:23.997 "ddgst": false, 00:25:23.997 "dhchap_key": "key1", 00:25:23.997 "dhchap_ctrlr_key": "ckey2", 00:25:23.997 "allow_unrecognized_csi": false, 00:25:23.997 "method": "bdev_nvme_attach_controller", 00:25:23.997 "req_id": 1 00:25:23.997 } 00:25:23.997 Got JSON-RPC error response 00:25:23.997 response: 00:25:23.997 { 00:25:23.997 "code": -5, 00:25:23.997 "message": "Input/output 
error" 00:25:23.997 } 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.997 nvme0n1 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.997 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.255 request: 00:25:24.255 { 00:25:24.255 "name": "nvme0", 00:25:24.255 "dhchap_key": "key1", 00:25:24.255 "dhchap_ctrlr_key": "ckey2", 00:25:24.255 "method": "bdev_nvme_set_keys", 00:25:24.255 "req_id": 1 00:25:24.255 } 00:25:24.255 Got JSON-RPC error response 00:25:24.255 response: 00:25:24.255 { 00:25:24.255 "code": -13, 00:25:24.255 "message": "Permission denied" 00:25:24.255 } 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:25:24.255 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:24.256 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.256 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.256 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.256 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.256 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:24.256 10:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:25.189 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.189 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:25.189 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.189 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.189 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.189 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:25.189 10:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:26.563 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.563 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:26.563 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.563 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.563 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.563 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:26.563 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:26.563 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.563 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.563 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjBlMzU1ZDYzOWY0MzA5ZjVmNzI5NjQxNzhhNzdhZjQ1YmYyOTlmNjQ3NzA2OWQ0+OaZTQ==: 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: ]] 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YTYzODU1Y2VjNWYxYjE2MDNhNWY3ZmVlMjA0NWE0MDg2Y2VjNGU4OWFhYjU4YjViY4faig==: 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.564 10:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.564 nvme0n1 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjBlNTcxMzVkNzkxMWI4Nzg4ZjJkNGE4MmNmOTlmNmTsmLRh: 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: ]] 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MGIzZGM4ZTRiMDlhM2RlMGEzYjI2NDFkOTA3ZTllM2INgSky: 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.564 request: 00:25:26.564 { 00:25:26.564 "name": "nvme0", 00:25:26.564 "dhchap_key": "key2", 00:25:26.564 "dhchap_ctrlr_key": "ckey1", 00:25:26.564 "method": "bdev_nvme_set_keys", 00:25:26.564 "req_id": 1 00:25:26.564 } 00:25:26.564 Got JSON-RPC error response 00:25:26.564 response: 00:25:26.564 { 00:25:26.564 "code": -13, 00:25:26.564 "message": "Permission denied" 00:25:26.564 } 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:26.564 10:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:27.498 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.498 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:27.498 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.498 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.498 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:27.756 10:53:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:27.756 rmmod nvme_tcp 00:25:27.756 rmmod nvme_fabrics 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2813190 ']' 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2813190 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 2813190 ']' 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 2813190 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2813190 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2813190' 00:25:27.756 killing process with pid 2813190 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 2813190 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 2813190 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:25:27.756 10:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.287 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:30.287 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:30.287 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:30.287 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:30.287 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:30.287 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:30.287 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:30.287 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:30.287 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:30.287 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:30.287 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:30.287 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:30.287 10:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:32.188 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:32.188 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:32.188 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:32.188 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:32.188 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:32.188 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:32.188 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:32.447 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:32.447 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:32.447 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:32.447 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:32.447 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:32.447 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:32.447 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:32.447 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:32.447 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:33.383 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:33.383 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.yn5 /tmp/spdk.key-null.n60 /tmp/spdk.key-sha256.GXc /tmp/spdk.key-sha384.918 /tmp/spdk.key-sha512.gZi /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:33.383 10:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:35.915 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:35.915 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:25:35.915 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:35.915 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:35.915 00:25:35.915 real 0m52.248s 00:25:35.915 user 0m47.892s 00:25:35.915 sys 0m11.417s 00:25:35.915 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:35.915 10:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.915 ************************************ 00:25:35.915 END TEST nvmf_auth_host 00:25:35.915 ************************************ 00:25:35.915 10:54:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:35.915 10:54:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:35.915 10:54:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:35.915 10:54:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.916 ************************************ 00:25:35.916 START TEST nvmf_digest 00:25:35.916 ************************************ 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:35.916 * Looking for test storage... 
00:25:35.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:35.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.916 --rc genhtml_branch_coverage=1 00:25:35.916 --rc genhtml_function_coverage=1 00:25:35.916 --rc genhtml_legend=1 00:25:35.916 --rc geninfo_all_blocks=1 00:25:35.916 --rc geninfo_unexecuted_blocks=1 00:25:35.916 00:25:35.916 ' 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:35.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.916 --rc genhtml_branch_coverage=1 00:25:35.916 --rc genhtml_function_coverage=1 00:25:35.916 --rc genhtml_legend=1 00:25:35.916 --rc geninfo_all_blocks=1 00:25:35.916 --rc geninfo_unexecuted_blocks=1 00:25:35.916 00:25:35.916 ' 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:35.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.916 --rc genhtml_branch_coverage=1 00:25:35.916 --rc genhtml_function_coverage=1 00:25:35.916 --rc genhtml_legend=1 00:25:35.916 --rc geninfo_all_blocks=1 00:25:35.916 --rc geninfo_unexecuted_blocks=1 00:25:35.916 00:25:35.916 ' 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:35.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.916 --rc genhtml_branch_coverage=1 00:25:35.916 --rc genhtml_function_coverage=1 00:25:35.916 --rc genhtml_legend=1 00:25:35.916 --rc geninfo_all_blocks=1 00:25:35.916 --rc geninfo_unexecuted_blocks=1 00:25:35.916 00:25:35.916 ' 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.916 
10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.916 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:36.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:36.175 10:54:03 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:36.175 10:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.446 
10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:41.446 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:41.446 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:41.446 Found net devices under 0000:86:00.0: cvl_0_0 
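The scan above matches the E810 PCI IDs (0x8086:0x159b) and then resolves each PCI function to its kernel net device through sysfs, which is where the cvl_0_0 / cvl_0_1 names come from. A minimal sketch of that lookup, using the device addresses reported in the log:

# Map each matched PCI function to the net device(s) it exposes, as common.sh does above.
for pci in 0000:86:00.0 0000:86:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
    done
done
# On this host the log shows cvl_0_0 and cvl_0_1.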
00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:41.446 Found net devices under 0000:86:00.1: cvl_0_1 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:41.446 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:41.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:25:41.446 00:25:41.447 --- 10.0.0.2 ping statistics --- 00:25:41.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.447 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:25:41.447 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:25:41.447 00:25:41.447 --- 10.0.0.1 ping statistics --- 00:25:41.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.447 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:25:41.447 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.447 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:41.447 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:41.447 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.447 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:41.447 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:41.447 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.447 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:41.447 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:41.447 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:41.447 10:54:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:41.447 ************************************ 00:25:41.447 START TEST nvmf_digest_clean 00:25:41.447 ************************************ 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2827335 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2827335 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2827335 ']' 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:41.447 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:41.447 [2024-11-07 10:54:09.084790] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:25:41.447 [2024-11-07 10:54:09.084832] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.705 [2024-11-07 10:54:09.150639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.705 [2024-11-07 10:54:09.192253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.705 [2024-11-07 10:54:09.192286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.705 [2024-11-07 10:54:09.192294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.705 [2024-11-07 10:54:09.192300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.705 [2024-11-07 10:54:09.192306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
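nvmftestinit, traced above, wires the two E810 ports back to back: the target port is moved into a private network namespace and given 10.0.0.2, the initiator keeps 10.0.0.1 in the default namespace, and a single ping in each direction proves the path before nvmf_tgt is started inside the namespace. Condensed from the commands in the trace (interface names as reported by the log; run as root):

# Target side lives in its own netns; the initiator stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # NVMe/TCP listen port
ping -c 1 10.0.0.2                                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator
# nvmf_tgt is then launched inside the namespace with --wait-for-rpc, as traced above.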
00:25:41.705 [2024-11-07 10:54:09.192895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.705 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:41.705 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:41.705 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:41.705 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:41.705 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:41.705 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.705 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:41.705 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:41.705 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:41.705 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.705 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:41.963 null0 00:25:41.963 [2024-11-07 10:54:09.377711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.963 [2024-11-07 10:54:09.401913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2827386 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2827386 /var/tmp/bperf.sock 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2827386 ']' 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:25:41.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:41.963 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:41.963 [2024-11-07 10:54:09.452469] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:25:41.964 [2024-11-07 10:54:09.452512] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827386 ] 00:25:41.964 [2024-11-07 10:54:09.514466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.964 [2024-11-07 10:54:09.555335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.964 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:41.964 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:41.964 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:41.964 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:41.964 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:42.221 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.221 10:54:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:42.787 nvme0n1 00:25:42.787 10:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:42.787 10:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:42.787 Running I/O for 2 seconds... 
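Each run_bperf pass follows the recipe just traced: start bdevperf paused on its own RPC socket, finish framework init, attach an NVMe-oF/TCP controller with data digest enabled, then trigger the timed workload over RPC. Condensed sketch, with the SPDK checkout path shortened to $SPDK for readability and all arguments taken from the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf idle (-z) and paused (--wait-for-rpc) on core 1 (-m 2).
$SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# (the harness waits for $SOCK to appear before issuing RPCs)

# 2. Finish init, then attach the target with data digest (--ddgst) enabled.
$SPDK/scripts/rpc.py -s $SOCK framework_start_init
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. Kick off the timed run; bdevperf prints the result table shown below.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

In the result blocks that follow, MiB/s is simply IOPS times io_size divided by 2^20; for this first randread pass, 25206.67 IOPS at 4096 B works out to about 98.46 MiB/s, matching the reported figure.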
00:25:44.654 24991.00 IOPS, 97.62 MiB/s [2024-11-07T09:54:12.325Z] 25198.00 IOPS, 98.43 MiB/s 00:25:44.654 Latency(us) 00:25:44.654 [2024-11-07T09:54:12.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.654 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:44.654 nvme0n1 : 2.00 25206.67 98.46 0.00 0.00 5072.61 2564.45 11568.53 00:25:44.654 [2024-11-07T09:54:12.325Z] =================================================================================================================== 00:25:44.654 [2024-11-07T09:54:12.325Z] Total : 25206.67 98.46 0.00 0.00 5072.61 2564.45 11568.53 00:25:44.654 { 00:25:44.654 "results": [ 00:25:44.654 { 00:25:44.654 "job": "nvme0n1", 00:25:44.654 "core_mask": "0x2", 00:25:44.654 "workload": "randread", 00:25:44.654 "status": "finished", 00:25:44.654 "queue_depth": 128, 00:25:44.654 "io_size": 4096, 00:25:44.654 "runtime": 2.00439, 00:25:44.654 "iops": 25206.67135637276, 00:25:44.654 "mibps": 98.4635599858311, 00:25:44.654 "io_failed": 0, 00:25:44.654 "io_timeout": 0, 00:25:44.654 "avg_latency_us": 5072.605690485451, 00:25:44.654 "min_latency_us": 2564.4521739130437, 00:25:44.654 "max_latency_us": 11568.528695652174 00:25:44.654 } 00:25:44.654 ], 00:25:44.654 "core_count": 1 00:25:44.654 } 00:25:44.654 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:44.654 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:44.654 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:44.654 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:44.654 | select(.opcode=="crc32c") 00:25:44.654 | "\(.module_name) \(.executed)"' 00:25:44.654 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2827386 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2827386 ']' 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2827386 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2827386 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2827386' 00:25:44.912 killing process with pid 2827386 00:25:44.912 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2827386 00:25:44.912 Received shutdown signal, test time was about 2.000000 seconds 00:25:44.912 00:25:44.912 Latency(us) 00:25:44.912 [2024-11-07T09:54:12.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.912 [2024-11-07T09:54:12.584Z] =================================================================================================================== 00:25:44.913 [2024-11-07T09:54:12.584Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:44.913 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2827386 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2828066 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2828066 /var/tmp/bperf.sock 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2828066 ']' 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:45.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:45.171 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:45.171 [2024-11-07 10:54:12.738415] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
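After each pass the clean-digest check reads bperf's accel statistics and keeps only the crc32c rows, confirming the digests ran in the expected module (software here, since DSA is disabled and scan_dsa=false). The jq filter below is the one from the trace; the JSON fed to it is a made-up example whose shape merely matches what accel_get_stats returns:

# Same filter as in the trace; the echoed payload is hypothetical sample data.
echo '{"operations":[{"opcode":"crc32c","module_name":"software","executed":50405},
                     {"opcode":"copy","module_name":"software","executed":0}]}' |
  jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# -> software 50405    (read as acc_module=software, acc_executed=50405)

A non-zero executed count plus a module_name equal to exp_module is what lets the check pass before killprocess tears the bdevperf instance down.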
00:25:45.171 [2024-11-07 10:54:12.738473] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828066 ] 00:25:45.171 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:45.171 Zero copy mechanism will not be used. 00:25:45.171 [2024-11-07 10:54:12.801534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.429 [2024-11-07 10:54:12.844438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.429 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:45.429 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:45.429 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:45.429 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:45.429 10:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:45.688 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.688 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.946 nvme0n1 00:25:45.946 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:45.946 10:54:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:46.204 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:46.204 Zero copy mechanism will not be used. 00:25:46.204 Running I/O for 2 seconds... 
00:25:48.071 5589.00 IOPS, 698.62 MiB/s [2024-11-07T09:54:15.742Z] 5369.50 IOPS, 671.19 MiB/s 00:25:48.071 Latency(us) 00:25:48.071 [2024-11-07T09:54:15.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.071 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:48.072 nvme0n1 : 2.00 5368.36 671.05 0.00 0.00 2977.88 758.65 8947.09 00:25:48.072 [2024-11-07T09:54:15.743Z] =================================================================================================================== 00:25:48.072 [2024-11-07T09:54:15.743Z] Total : 5368.36 671.05 0.00 0.00 2977.88 758.65 8947.09 00:25:48.072 { 00:25:48.072 "results": [ 00:25:48.072 { 00:25:48.072 "job": "nvme0n1", 00:25:48.072 "core_mask": "0x2", 00:25:48.072 "workload": "randread", 00:25:48.072 "status": "finished", 00:25:48.072 "queue_depth": 16, 00:25:48.072 "io_size": 131072, 00:25:48.072 "runtime": 2.003404, 00:25:48.072 "iops": 5368.3630460955455, 00:25:48.072 "mibps": 671.0453807619432, 00:25:48.072 "io_failed": 0, 00:25:48.072 "io_timeout": 0, 00:25:48.072 "avg_latency_us": 2977.883982616781, 00:25:48.072 "min_latency_us": 758.6504347826087, 00:25:48.072 "max_latency_us": 8947.088695652174 00:25:48.072 } 00:25:48.072 ], 00:25:48.072 "core_count": 1 00:25:48.072 } 00:25:48.072 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:48.072 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:48.072 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:48.072 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:48.072 | select(.opcode=="crc32c") 00:25:48.072 | "\(.module_name) \(.executed)"' 00:25:48.072 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2828066 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2828066 ']' 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2828066 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2828066 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2828066' 00:25:48.330 killing process with pid 2828066 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2828066 00:25:48.330 Received shutdown signal, test time was about 2.000000 seconds 00:25:48.330 00:25:48.330 Latency(us) 00:25:48.330 [2024-11-07T09:54:16.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.330 [2024-11-07T09:54:16.001Z] =================================================================================================================== 00:25:48.330 [2024-11-07T09:54:16.001Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:48.330 10:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2828066 00:25:48.588 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:48.588 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:48.588 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:48.588 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:48.588 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:48.588 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:48.588 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:48.588 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2828645 00:25:48.588 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2828645 /var/tmp/bperf.sock 00:25:48.589 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:48.589 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2828645 ']' 00:25:48.589 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:48.589 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:48.589 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:48.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:48.589 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:48.589 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:48.589 [2024-11-07 10:54:16.110980] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:25:48.589 [2024-11-07 10:54:16.111028] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2828645 ] 00:25:48.589 [2024-11-07 10:54:16.175050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.589 [2024-11-07 10:54:16.212328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.847 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:48.847 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:48.847 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:48.847 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:48.847 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:48.847 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:48.847 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:49.158 nvme0n1 00:25:49.158 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:49.158 10:54:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:49.446 Running I/O for 2 seconds... 
00:25:51.336 26549.00 IOPS, 103.71 MiB/s [2024-11-07T09:54:19.007Z] 26610.50 IOPS, 103.95 MiB/s 00:25:51.336 Latency(us) 00:25:51.336 [2024-11-07T09:54:19.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.336 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:51.336 nvme0n1 : 2.01 26613.03 103.96 0.00 0.00 4800.46 3120.08 7693.36 00:25:51.336 [2024-11-07T09:54:19.007Z] =================================================================================================================== 00:25:51.336 [2024-11-07T09:54:19.007Z] Total : 26613.03 103.96 0.00 0.00 4800.46 3120.08 7693.36 00:25:51.336 { 00:25:51.336 "results": [ 00:25:51.336 { 00:25:51.336 "job": "nvme0n1", 00:25:51.336 "core_mask": "0x2", 00:25:51.336 "workload": "randwrite", 00:25:51.336 "status": "finished", 00:25:51.336 "queue_depth": 128, 00:25:51.336 "io_size": 4096, 00:25:51.336 "runtime": 2.005822, 00:25:51.336 "iops": 26613.02947120931, 00:25:51.336 "mibps": 103.95714637191136, 00:25:51.336 "io_failed": 0, 00:25:51.336 "io_timeout": 0, 00:25:51.336 "avg_latency_us": 4800.460369761917, 00:25:51.336 "min_latency_us": 3120.0834782608695, 00:25:51.336 "max_latency_us": 7693.356521739131 00:25:51.336 } 00:25:51.336 ], 00:25:51.336 "core_count": 1 00:25:51.336 } 00:25:51.336 10:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:51.336 10:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:51.336 10:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:51.336 10:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:51.336 | select(.opcode=="crc32c") 00:25:51.336 | "\(.module_name) \(.executed)"' 00:25:51.336 10:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:51.594 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:51.594 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:51.594 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:51.594 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:51.595 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2828645 00:25:51.595 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2828645 ']' 00:25:51.595 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2828645 00:25:51.595 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:25:51.595 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:51.595 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2828645 00:25:51.595 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:51.595 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # 
'[' reactor_1 = sudo ']' 00:25:51.595 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2828645' 00:25:51.595 killing process with pid 2828645 00:25:51.595 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2828645 00:25:51.595 Received shutdown signal, test time was about 2.000000 seconds 00:25:51.595 00:25:51.595 Latency(us) 00:25:51.595 [2024-11-07T09:54:19.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.595 [2024-11-07T09:54:19.266Z] =================================================================================================================== 00:25:51.595 [2024-11-07T09:54:19.266Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:51.595 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2828645 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2829121 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2829121 /var/tmp/bperf.sock 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2829121 ']' 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:51.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:51.854 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:51.854 [2024-11-07 10:54:19.380083] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:25:51.854 [2024-11-07 10:54:19.380135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829121 ] 00:25:51.854 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:51.854 Zero copy mechanism will not be used. 00:25:51.854 [2024-11-07 10:54:19.441558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.854 [2024-11-07 10:54:19.481534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.115 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:52.115 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:25:52.115 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:52.115 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:52.115 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:52.376 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.376 10:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.634 nvme0n1 00:25:52.634 10:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:52.634 10:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:52.634 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:52.634 Zero copy mechanism will not be used. 00:25:52.634 Running I/O for 2 seconds... 
00:25:54.949 5048.00 IOPS, 631.00 MiB/s [2024-11-07T09:54:22.620Z] 5237.00 IOPS, 654.62 MiB/s 00:25:54.949 Latency(us) 00:25:54.949 [2024-11-07T09:54:22.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.949 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:54.949 nvme0n1 : 2.00 5234.39 654.30 0.00 0.00 3051.53 2037.31 7465.41 00:25:54.949 [2024-11-07T09:54:22.620Z] =================================================================================================================== 00:25:54.949 [2024-11-07T09:54:22.620Z] Total : 5234.39 654.30 0.00 0.00 3051.53 2037.31 7465.41 00:25:54.949 { 00:25:54.949 "results": [ 00:25:54.949 { 00:25:54.949 "job": "nvme0n1", 00:25:54.949 "core_mask": "0x2", 00:25:54.949 "workload": "randwrite", 00:25:54.949 "status": "finished", 00:25:54.949 "queue_depth": 16, 00:25:54.949 "io_size": 131072, 00:25:54.949 "runtime": 2.004055, 00:25:54.949 "iops": 5234.387279790225, 00:25:54.949 "mibps": 654.2984099737781, 00:25:54.949 "io_failed": 0, 00:25:54.949 "io_timeout": 0, 00:25:54.949 "avg_latency_us": 3051.525602354209, 00:25:54.949 "min_latency_us": 2037.3147826086956, 00:25:54.949 "max_latency_us": 7465.405217391304 00:25:54.949 } 00:25:54.949 ], 00:25:54.949 "core_count": 1 00:25:54.949 } 00:25:54.949 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:54.949 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:54.949 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:54.949 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:54.949 | select(.opcode=="crc32c") 00:25:54.949 | "\(.module_name) \(.executed)"' 00:25:54.949 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:54.949 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:54.949 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:54.949 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:54.949 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:54.949 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2829121 00:25:54.950 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2829121 ']' 00:25:54.950 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2829121 00:25:54.950 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:25:54.950 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:54.950 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2829121 00:25:54.950 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:54.950 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:25:54.950 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2829121' 00:25:54.950 killing process with pid 2829121 00:25:54.950 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2829121 00:25:54.950 Received shutdown signal, test time was about 2.000000 seconds 00:25:54.950 00:25:54.950 Latency(us) 00:25:54.950 [2024-11-07T09:54:22.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.950 [2024-11-07T09:54:22.621Z] =================================================================================================================== 00:25:54.950 [2024-11-07T09:54:22.621Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:54.950 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2829121 00:25:55.209 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2827335 00:25:55.209 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2827335 ']' 00:25:55.209 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2827335 00:25:55.209 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:25:55.209 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:55.209 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2827335 00:25:55.209 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:55.209 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:55.209 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2827335' 00:25:55.209 killing process with pid 2827335 00:25:55.209 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2827335 00:25:55.209 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2827335 00:25:55.469 00:25:55.469 real 0m13.910s 00:25:55.469 user 0m26.559s 00:25:55.469 sys 0m4.524s 00:25:55.469 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:55.469 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:55.469 ************************************ 00:25:55.469 END TEST nvmf_digest_clean 00:25:55.469 ************************************ 00:25:55.469 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:55.469 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:25:55.469 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:55.469 10:54:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:55.469 ************************************ 00:25:55.469 START TEST nvmf_digest_error 00:25:55.469 ************************************ 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2829834 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2829834 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2829834 ']' 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:55.469 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.469 [2024-11-07 10:54:23.079598] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:25:55.469 [2024-11-07 10:54:23.079641] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.728 [2024-11-07 10:54:23.146060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.728 [2024-11-07 10:54:23.186560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.728 [2024-11-07 10:54:23.186596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.728 [2024-11-07 10:54:23.186603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.728 [2024-11-07 10:54:23.186610] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.728 [2024-11-07 10:54:23.186615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:55.728 [2024-11-07 10:54:23.187145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.728 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:55.728 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:25:55.728 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.728 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:55.728 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.728 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.728 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.729 [2024-11-07 10:54:23.271635] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.729 null0 00:25:55.729 [2024-11-07 10:54:23.361436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.729 [2024-11-07 10:54:23.385660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2829860 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2829860 /var/tmp/bperf.sock 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2829860 ']' 
00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:55.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:55.729 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.988 [2024-11-07 10:54:23.440047] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:25:55.988 [2024-11-07 10:54:23.440090] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2829860 ] 00:25:55.988 [2024-11-07 10:54:23.502314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.988 [2024-11-07 10:54:23.544741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.988 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:55.988 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:25:55.988 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:55.988 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:56.247 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:56.247 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.247 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:56.247 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.247 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.247 10:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.816 nvme0n1 00:25:56.816 10:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:56.816 10:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.816 10:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:25:56.816 10:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.816 10:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:56.816 10:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:56.816 Running I/O for 2 seconds... 00:25:56.816 [2024-11-07 10:54:24.391650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:56.816 [2024-11-07 10:54:24.391686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.816 [2024-11-07 10:54:24.391697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.816 [2024-11-07 10:54:24.400537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:56.816 [2024-11-07 10:54:24.400562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.816 [2024-11-07 10:54:24.400571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.816 [2024-11-07 10:54:24.413449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:56.816 [2024-11-07 10:54:24.413471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.816 [2024-11-07 10:54:24.413480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.816 [2024-11-07 10:54:24.426022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:56.816 [2024-11-07 10:54:24.426044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.816 [2024-11-07 10:54:24.426053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.816 [2024-11-07 10:54:24.438670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:56.816 [2024-11-07 10:54:24.438692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.816 [2024-11-07 10:54:24.438701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.816 [2024-11-07 10:54:24.449883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:56.816 [2024-11-07 10:54:24.449905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.816 [2024-11-07 10:54:24.449914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.816 [2024-11-07 10:54:24.458473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:56.816 [2024-11-07 10:54:24.458496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.816 [2024-11-07 10:54:24.458504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.816 [2024-11-07 10:54:24.471159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:56.816 [2024-11-07 10:54:24.471182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.816 [2024-11-07 10:54:24.471191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.076 [2024-11-07 10:54:24.484140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.076 [2024-11-07 10:54:24.484162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.076 [2024-11-07 10:54:24.484170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.076 [2024-11-07 10:54:24.496225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.076 [2024-11-07 10:54:24.496247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.076 [2024-11-07 10:54:24.496256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.076 [2024-11-07 10:54:24.504516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.076 [2024-11-07 10:54:24.504538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.076 [2024-11-07 10:54:24.504546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.516822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.516844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.516852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.527701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.527722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.527730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.535748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.535770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.535779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.548704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.548727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.548735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.556676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.556697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.556710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.567648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.567670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.567678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.578648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.578669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.578678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.587831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.587851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.587860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.596410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.596432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.596446] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.605941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.605962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.605970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.615941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.615962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.615971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.625367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.625388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.625396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.634468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.634489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.634498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.643386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.643410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.643419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.653486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.653507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.653516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.664353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.664375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22069 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.664384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.673384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.673406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.673414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.682743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.682764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.682773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.692970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.692991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.692999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.701567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.701589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.701597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.711502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.711523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.711531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.722592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.722613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.077 [2024-11-07 10:54:24.722621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.077 [2024-11-07 10:54:24.730983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.077 [2024-11-07 10:54:24.731004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:69 nsid:1 lba:17648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.078 [2024-11-07 10:54:24.731012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.078 [2024-11-07 10:54:24.741422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.078 [2024-11-07 10:54:24.741448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.078 [2024-11-07 10:54:24.741457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.751425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.751451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.751460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.760805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.760825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.760833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.769165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.769187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.769196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.780068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.780088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.780097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.788867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.788888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.788896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.801406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.801428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.801441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.812378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.812405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.812413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.821270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.821291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.821299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.832554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.832575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.832584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.840800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.840821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.840830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.853290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.853312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.853321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.861446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.861468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.861476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.873086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 
00:25:57.338 [2024-11-07 10:54:24.873107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.873115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.886005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.886027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.886035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.895990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.896011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.896020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.905806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.905827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.905835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.915478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.915498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.915507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.925146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.925167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.925176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.934742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.934764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.934772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.942761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.942782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.942790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.953980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.954002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.954010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.963296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.963358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.338 [2024-11-07 10:54:24.963366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.338 [2024-11-07 10:54:24.974235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.338 [2024-11-07 10:54:24.974256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.339 [2024-11-07 10:54:24.974265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.339 [2024-11-07 10:54:24.983253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.339 [2024-11-07 10:54:24.983274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.339 [2024-11-07 10:54:24.983286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.339 [2024-11-07 10:54:24.994756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.339 [2024-11-07 10:54:24.994777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.339 [2024-11-07 10:54:24.994785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.598 [2024-11-07 10:54:25.006737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.598 [2024-11-07 10:54:25.006759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.598 [2024-11-07 10:54:25.006767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.598 [2024-11-07 10:54:25.020505] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.598 [2024-11-07 10:54:25.020527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.598 [2024-11-07 10:54:25.020535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.598 [2024-11-07 10:54:25.032081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.598 [2024-11-07 10:54:25.032102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.598 [2024-11-07 10:54:25.032110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.598 [2024-11-07 10:54:25.040829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.598 [2024-11-07 10:54:25.040850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.598 [2024-11-07 10:54:25.040858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.598 [2024-11-07 10:54:25.050986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.598 [2024-11-07 10:54:25.051007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.598 [2024-11-07 10:54:25.051016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.060800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.060820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.060829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.069073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.069094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.069103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.081499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.081524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.081532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:57.599 [2024-11-07 10:54:25.089970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.089991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.089999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.100634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.100656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.100665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.111111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.111131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.111140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.120729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.120751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.120760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.128887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.128909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.128918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.139843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.139865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.139874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.151015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.151037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.151046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.159462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.159483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.159492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.172047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.172068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.172077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.183629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.183651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.183659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.193053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.193074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.193082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.204272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.204293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.204304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.214750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.214772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.214780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.222557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.222578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.222586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.234427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.234453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.234461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.246749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.246771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.246779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.599 [2024-11-07 10:54:25.257989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.599 [2024-11-07 10:54:25.258011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.599 [2024-11-07 10:54:25.258023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.266839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.266861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.266869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.279357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.279378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.279386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.291901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.291922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.291931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.303225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.303246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.859 [2024-11-07 10:54:25.303254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.316353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.316374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.316383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.325145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.325166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.325174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.336648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.336669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.336677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.348522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.348543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.348552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.360737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.360757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.360765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.369350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.369371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.369379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 24234.00 IOPS, 94.66 MiB/s [2024-11-07T09:54:25.530Z] [2024-11-07 10:54:25.382059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.382081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.382089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.394904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.394927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.394936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.403305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.403326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.403335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.415076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.415100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.415108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.426091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.426113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.426122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.435078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.435099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.435108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.446931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.859 [2024-11-07 10:54:25.446953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.446966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.859 [2024-11-07 10:54:25.458458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 
00:25:57.859 [2024-11-07 10:54:25.458480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.859 [2024-11-07 10:54:25.458489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.860 [2024-11-07 10:54:25.466753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.860 [2024-11-07 10:54:25.466775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.860 [2024-11-07 10:54:25.466783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.860 [2024-11-07 10:54:25.476933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.860 [2024-11-07 10:54:25.476955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.860 [2024-11-07 10:54:25.476963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.860 [2024-11-07 10:54:25.486017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.860 [2024-11-07 10:54:25.486039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.860 [2024-11-07 10:54:25.486047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.860 [2024-11-07 10:54:25.496471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.860 [2024-11-07 10:54:25.496493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.860 [2024-11-07 10:54:25.496501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.860 [2024-11-07 10:54:25.506013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.860 [2024-11-07 10:54:25.506034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.860 [2024-11-07 10:54:25.506043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.860 [2024-11-07 10:54:25.515170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.860 [2024-11-07 10:54:25.515190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.860 [2024-11-07 10:54:25.515198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.860 [2024-11-07 10:54:25.523620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x20bc390) 00:25:57.860 [2024-11-07 10:54:25.523641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.860 [2024-11-07 10:54:25.523649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.119 [2024-11-07 10:54:25.534240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.119 [2024-11-07 10:54:25.534264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.119 [2024-11-07 10:54:25.534274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.119 [2024-11-07 10:54:25.545198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.119 [2024-11-07 10:54:25.545220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.119 [2024-11-07 10:54:25.545229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.119 [2024-11-07 10:54:25.554209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.119 [2024-11-07 10:54:25.554230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.119 [2024-11-07 10:54:25.554239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.119 [2024-11-07 10:54:25.566897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.119 [2024-11-07 10:54:25.566919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.119 [2024-11-07 10:54:25.566928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.119 [2024-11-07 10:54:25.578924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.119 [2024-11-07 10:54:25.578945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.119 [2024-11-07 10:54:25.578954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.119 [2024-11-07 10:54:25.586964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.119 [2024-11-07 10:54:25.586985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.119 [2024-11-07 10:54:25.586994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.119 [2024-11-07 10:54:25.597125] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.119 [2024-11-07 10:54:25.597145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.119 [2024-11-07 10:54:25.597154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.119 [2024-11-07 10:54:25.606786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.119 [2024-11-07 10:54:25.606806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.119 [2024-11-07 10:54:25.606815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.119 [2024-11-07 10:54:25.615892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.119 [2024-11-07 10:54:25.615913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.119 [2024-11-07 10:54:25.615922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.119 [2024-11-07 10:54:25.625237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.119 [2024-11-07 10:54:25.625258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.625267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.634885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.634908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.634918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.644883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.644905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.644914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.654024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.654045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.654054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:58.120 [2024-11-07 10:54:25.663749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.663770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.663779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.673978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.673998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.674008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.682244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.682264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.682274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.691713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.691733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.691742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.701228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.701248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.701261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.710590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.710610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.710619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.721590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.721611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.721620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.732517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.732537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.732547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.740702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.740723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.740733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.750858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.750879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.750888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.760009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.760029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.760038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.769996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.770017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.770026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.120 [2024-11-07 10:54:25.779102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.120 [2024-11-07 10:54:25.779123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.120 [2024-11-07 10:54:25.779132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.380 [2024-11-07 10:54:25.790399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.380 [2024-11-07 10:54:25.790421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.380 [2024-11-07 10:54:25.790430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.380 [2024-11-07 10:54:25.801511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.380 [2024-11-07 10:54:25.801532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.380 [2024-11-07 10:54:25.801541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.380 [2024-11-07 10:54:25.809314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.380 [2024-11-07 10:54:25.809333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.380 [2024-11-07 10:54:25.809343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.380 [2024-11-07 10:54:25.820787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.380 [2024-11-07 10:54:25.820808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.380 [2024-11-07 10:54:25.820819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.380 [2024-11-07 10:54:25.830980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.380 [2024-11-07 10:54:25.830999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.831007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.839934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.839954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.839962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.851327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.851346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.851354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.860064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.860083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.381 [2024-11-07 10:54:25.860091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.869644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.869666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.869679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.881150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.881172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.881180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.889837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.889859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.889868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.900249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.900269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.900278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.911181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.911203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.911211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.919665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.919686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.919695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.932287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.932308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 
lba:4046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.932317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.944750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.944772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.944781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.953347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.953367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.953376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.964097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.964123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.964132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.973867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.973888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.973897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.984324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.984345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.984353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:25.993449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:25.993470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:25.993479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:26.004369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:26.004389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:26.004398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:26.016171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:26.016191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:26.016199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:26.026533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:26.026554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:26.026562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:26.034821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:26.034842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:26.034851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.381 [2024-11-07 10:54:26.045920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.381 [2024-11-07 10:54:26.045942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.381 [2024-11-07 10:54:26.045950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.640 [2024-11-07 10:54:26.056638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.640 [2024-11-07 10:54:26.056659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.640 [2024-11-07 10:54:26.056668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.640 [2024-11-07 10:54:26.068864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.068885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.068894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.077280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 
00:25:58.641 [2024-11-07 10:54:26.077300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.077308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.088968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.088989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.088998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.101380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.101402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.101410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.110946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.110966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.110974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.119850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.119871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.119880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.129686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.129720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.129728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.139033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.139053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.139067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.148657] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.148678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.148686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.159380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.159402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.159410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.169311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.169331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.169340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.178898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.178920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.178928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.190299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.190321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.190330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.200851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.200872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.200881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.209426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.209452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.209461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.220285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.220307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.220315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.230926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.230947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.230956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.239414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.239442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.239451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.251909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.641 [2024-11-07 10:54:26.251931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.641 [2024-11-07 10:54:26.251939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.641 [2024-11-07 10:54:26.263485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.661 [2024-11-07 10:54:26.263507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.661 [2024-11-07 10:54:26.263516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.661 [2024-11-07 10:54:26.276562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.661 [2024-11-07 10:54:26.276583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.661 [2024-11-07 10:54:26.276592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.661 [2024-11-07 10:54:26.285709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.661 [2024-11-07 10:54:26.285731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.661 [2024-11-07 10:54:26.285739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.661 [2024-11-07 10:54:26.297221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.661 [2024-11-07 10:54:26.297243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.661 [2024-11-07 10:54:26.297251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.920 [2024-11-07 10:54:26.308539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.920 [2024-11-07 10:54:26.308562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.920 [2024-11-07 10:54:26.308571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.920 [2024-11-07 10:54:26.316913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.920 [2024-11-07 10:54:26.316934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.920 [2024-11-07 10:54:26.316946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.920 [2024-11-07 10:54:26.328292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.920 [2024-11-07 10:54:26.328313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.920 [2024-11-07 10:54:26.328321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.920 [2024-11-07 10:54:26.341044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.920 [2024-11-07 10:54:26.341066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.920 [2024-11-07 10:54:26.341075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.920 [2024-11-07 10:54:26.352673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.920 [2024-11-07 10:54:26.352695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.920 [2024-11-07 10:54:26.352703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.920 [2024-11-07 10:54:26.361073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.920 [2024-11-07 10:54:26.361093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.920 [2024-11-07 10:54:26.361101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.920 [2024-11-07 10:54:26.373374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x20bc390) 00:25:58.920 [2024-11-07 10:54:26.373396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.920 [2024-11-07 10:54:26.373405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.920 24621.50 IOPS, 96.18 MiB/s 00:25:58.921 Latency(us) 00:25:58.921 [2024-11-07T09:54:26.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.921 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:58.921 nvme0n1 : 2.01 24628.52 96.21 0.00 0.00 5191.77 2607.19 18236.10 00:25:58.921 [2024-11-07T09:54:26.592Z] =================================================================================================================== 00:25:58.921 [2024-11-07T09:54:26.592Z] Total : 24628.52 96.21 0.00 0.00 5191.77 2607.19 18236.10 00:25:58.921 { 00:25:58.921 "results": [ 00:25:58.921 { 00:25:58.921 "job": "nvme0n1", 00:25:58.921 "core_mask": "0x2", 00:25:58.921 "workload": "randread", 00:25:58.921 "status": "finished", 00:25:58.921 "queue_depth": 128, 00:25:58.921 "io_size": 4096, 00:25:58.921 "runtime": 2.006414, 00:25:58.921 "iops": 24628.516348071735, 00:25:58.921 "mibps": 96.20514198465521, 00:25:58.921 "io_failed": 0, 00:25:58.921 "io_timeout": 0, 00:25:58.921 "avg_latency_us": 5191.77230501212, 00:25:58.921 "min_latency_us": 2607.1930434782607, 00:25:58.921 "max_latency_us": 18236.104347826087 00:25:58.921 } 00:25:58.921 ], 00:25:58.921 "core_count": 1 00:25:58.921 } 00:25:58.921 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:58.921 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:58.921 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:58.921 | .driver_specific 00:25:58.921 | .nvme_error 00:25:58.921 | .status_code 00:25:58.921 | .command_transient_transport_error' 00:25:58.921 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 )) 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2829860 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2829860 ']' 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2829860 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2829860 00:25:59.180 10:54:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2829860' 00:25:59.180 killing process with pid 2829860 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2829860 00:25:59.180 Received shutdown signal, test time was about 2.000000 seconds 00:25:59.180 00:25:59.180 Latency(us) 00:25:59.180 [2024-11-07T09:54:26.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.180 [2024-11-07T09:54:26.851Z] =================================================================================================================== 00:25:59.180 [2024-11-07T09:54:26.851Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2829860 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2830339 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2830339 /var/tmp/bperf.sock 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2830339 ']' 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:59.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:59.180 10:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:59.439 [2024-11-07 10:54:26.856380] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:25:59.439 [2024-11-07 10:54:26.856458] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830339 ] 00:25:59.439 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:59.439 Zero copy mechanism will not be used. 00:25:59.439 [2024-11-07 10:54:26.919132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.439 [2024-11-07 10:54:26.961343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.439 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:59.439 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:25:59.439 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:59.439 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:59.698 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:59.698 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.698 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:59.698 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.699 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:59.699 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:59.957 nvme0n1 00:25:59.957 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:59.957 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.957 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:59.957 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.957 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:59.957 10:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:00.216 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:00.216 Zero copy mechanism will not be used. 00:26:00.216 Running I/O for 2 seconds... 
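(Editor's sketch of the setup traced above, for readability: the second error-injection pass runs bdevperf over randread with 128 KiB I/O at queue depth 16 against /var/tmp/bperf.sock. This is a condensed, hedged reconstruction from the xtrace lines in this log, not the digest.sh source; the socket path, target address, subsystem NQN and RPC arguments are copied verbatim from the trace, `$rootdir` stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and the `&` backgrounding is a simplification of the script's waitforlisten handling.)

# start the bperf app in wait mode (-z); the real script waits on the socket via waitforlisten
$rootdir/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

# keep per-controller NVMe error statistics and retry failed I/O indefinitely
$rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# clear any stale crc32c error injection, then attach the controller with TCP data digest enabled
# (rpc_cmd is the autotest helper seen in the trace, a wrapper around scripts/rpc.py)
rpc_cmd accel_error_inject_error -o crc32c -t disable
$rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# corrupt every 32nd crc32c computation so data digest verification fails intermittently,
# which is what produces the repeated "data digest error" / TRANSIENT TRANSPORT ERROR lines below
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

# run the 2-second workload, then check that transient transport errors were actually counted
$rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
errcount=$($rootdir/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))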
00:26:00.216 [2024-11-07 10:54:27.643677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.216 [2024-11-07 10:54:27.643712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.216 [2024-11-07 10:54:27.643724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.216 [2024-11-07 10:54:27.649763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.216 [2024-11-07 10:54:27.649789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.216 [2024-11-07 10:54:27.649799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.216 [2024-11-07 10:54:27.655704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.216 [2024-11-07 10:54:27.655732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.216 [2024-11-07 10:54:27.655741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.216 [2024-11-07 10:54:27.661755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.216 [2024-11-07 10:54:27.661776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.216 [2024-11-07 10:54:27.661785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.216 [2024-11-07 10:54:27.667519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.216 [2024-11-07 10:54:27.667542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.216 [2024-11-07 10:54:27.667550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.216 [2024-11-07 10:54:27.673431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.216 [2024-11-07 10:54:27.673458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.216 [2024-11-07 10:54:27.673466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.216 [2024-11-07 10:54:27.680143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.216 [2024-11-07 10:54:27.680163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.216 [2024-11-07 10:54:27.680172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.216 [2024-11-07 10:54:27.685605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.216 [2024-11-07 10:54:27.685626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.216 [2024-11-07 10:54:27.685634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.216 [2024-11-07 10:54:27.691899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.216 [2024-11-07 10:54:27.691920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.216 [2024-11-07 10:54:27.691929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.216 [2024-11-07 10:54:27.697904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.216 [2024-11-07 10:54:27.697925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.216 [2024-11-07 10:54:27.697934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.216 [2024-11-07 10:54:27.703930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.216 [2024-11-07 10:54:27.703952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.703964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.710175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.710196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.710205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.716628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.716649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.716657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.722236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.722258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.722266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.727527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.727549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.727557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.732727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.732748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.732757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.739120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.739141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.739149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.744083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.744105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.744113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.749903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.749925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.749933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.756605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.756633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.756642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.764857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.764881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:00.217 [2024-11-07 10:54:27.764890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.772803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.772826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.772835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.780604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.780628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.780637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.789308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.789332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.789340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.797532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.797555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.797564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.805730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.805753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.805762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.813976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.813999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.814007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.821970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.821993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.822002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.829641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.829664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.829673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.838021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.838044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.838053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.846093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.846116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.846126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.853107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.853130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.853139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.860953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.860977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.860986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.868353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.868375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.868384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.874402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.874423] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.874432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.217 [2024-11-07 10:54:27.880365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.217 [2024-11-07 10:54:27.880388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.217 [2024-11-07 10:54:27.880397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.886420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.886452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.886469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.892300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.892323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.892331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.897552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.897575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.897584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.903318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.903341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.903351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.909085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.909108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.909116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.914913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.914936] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.914944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.921017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.921040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.921048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.926834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.926856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.926865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.931545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.931567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.931575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.939275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.939303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.939312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.947601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.947625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.947634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.954809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.954833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.954842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.962284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 
00:26:00.477 [2024-11-07 10:54:27.962306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.962315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.970396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.970420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.970429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.978984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.979007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.979016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.986357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.986381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.986390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:27.993925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:27.993947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:27.993956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:28.001966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:28.001990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:28.001999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:28.010458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:28.010481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:28.010489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:28.018350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:28.018373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:28.018382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:28.025690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:28.025713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:28.025722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:28.029093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:28.029115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:28.029124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:28.034939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:28.034961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:28.034969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:28.040896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:28.040917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:28.040925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:28.046993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:28.047015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:28.047024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:28.052828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:28.052851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:28.052859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:28.058587] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.477 [2024-11-07 10:54:28.058609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.477 [2024-11-07 10:54:28.058621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.477 [2024-11-07 10:54:28.064682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.064705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.064714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.478 [2024-11-07 10:54:28.070376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.070399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.070408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.478 [2024-11-07 10:54:28.076280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.076303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.076312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.478 [2024-11-07 10:54:28.081916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.081939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.081947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.478 [2024-11-07 10:54:28.087449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.087471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.087480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.478 [2024-11-07 10:54:28.093150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.093172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.093181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:26:00.478 [2024-11-07 10:54:28.099123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.099145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.099154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.478 [2024-11-07 10:54:28.104751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.104772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.104781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.478 [2024-11-07 10:54:28.110593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.110614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.110622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.478 [2024-11-07 10:54:28.116733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.116754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.116762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.478 [2024-11-07 10:54:28.122609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.122631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.122639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.478 [2024-11-07 10:54:28.128424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.128452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.128460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.478 [2024-11-07 10:54:28.134342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.134363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.134371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.478 [2024-11-07 10:54:28.140039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.478 [2024-11-07 10:54:28.140062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.478 [2024-11-07 10:54:28.140070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.737 [2024-11-07 10:54:28.145551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.737 [2024-11-07 10:54:28.145574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.737 [2024-11-07 10:54:28.145582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.737 [2024-11-07 10:54:28.151503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.737 [2024-11-07 10:54:28.151525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.737 [2024-11-07 10:54:28.151533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.737 [2024-11-07 10:54:28.156951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.737 [2024-11-07 10:54:28.156975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.737 [2024-11-07 10:54:28.156988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.737 [2024-11-07 10:54:28.162449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.737 [2024-11-07 10:54:28.162471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.737 [2024-11-07 10:54:28.162479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.737 [2024-11-07 10:54:28.168580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.737 [2024-11-07 10:54:28.168602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.737 [2024-11-07 10:54:28.168611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.737 [2024-11-07 10:54:28.174430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.737 [2024-11-07 10:54:28.174459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.737 [2024-11-07 10:54:28.174467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.737 [2024-11-07 10:54:28.180383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.737 [2024-11-07 10:54:28.180404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.737 [2024-11-07 10:54:28.180414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.737 [2024-11-07 10:54:28.185809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.737 [2024-11-07 10:54:28.185831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.737 [2024-11-07 10:54:28.185840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.737 [2024-11-07 10:54:28.191906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.737 [2024-11-07 10:54:28.191929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.737 [2024-11-07 10:54:28.191937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.737 [2024-11-07 10:54:28.197780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.737 [2024-11-07 10:54:28.197801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.737 [2024-11-07 10:54:28.197809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.737 [2024-11-07 10:54:28.203599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.737 [2024-11-07 10:54:28.203621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.737 [2024-11-07 10:54:28.203629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.737 [2024-11-07 10:54:28.209406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.737 [2024-11-07 10:54:28.209439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.209448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.215056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.215078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:00.738 [2024-11-07 10:54:28.215086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.221061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.221082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.221090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.224107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.224129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.224138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.229454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.229476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.229484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.235636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.235658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.235667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.241349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.241371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.241380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.247043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.247066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.247075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.252789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.252813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13216 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.252823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.258490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.258512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.258521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.263602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.263624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.263632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.268807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.268830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.268838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.274170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.274192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.274201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.279398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.279420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.279429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.284856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.284879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.284888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.290268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.290291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.290300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.295638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.295661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.295669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.301161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.301183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.301196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.306575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.306598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.306606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.312000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.312022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.312030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.317552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.317574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.317582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.323130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.323151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.323160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.328782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 
00:26:00.738 [2024-11-07 10:54:28.328805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.328814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.334448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.334472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.334480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.340246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.340268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.340277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.345913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.345934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.345943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.351650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.351676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.351685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.356616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.356638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.356647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.362155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.362177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.362185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.367565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.367588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.367597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.373532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.373554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.373563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.379564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.379586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.379594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.385184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.385207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.385215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.391004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.391026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.391035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.396945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.396968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.396981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.738 [2024-11-07 10:54:28.402773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.738 [2024-11-07 10:54:28.402797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.738 [2024-11-07 10:54:28.402806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.998 [2024-11-07 10:54:28.408535] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.998 [2024-11-07 10:54:28.408558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.998 [2024-11-07 10:54:28.408570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.998 [2024-11-07 10:54:28.414283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.998 [2024-11-07 10:54:28.414306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.998 [2024-11-07 10:54:28.414314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.998 [2024-11-07 10:54:28.419949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.998 [2024-11-07 10:54:28.419971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.998 [2024-11-07 10:54:28.419979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.998 [2024-11-07 10:54:28.425335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.998 [2024-11-07 10:54:28.425356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.998 [2024-11-07 10:54:28.425364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.998 [2024-11-07 10:54:28.430750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.998 [2024-11-07 10:54:28.430772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.998 [2024-11-07 10:54:28.430781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.998 [2024-11-07 10:54:28.436223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.998 [2024-11-07 10:54:28.436246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.998 [2024-11-07 10:54:28.436254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.998 [2024-11-07 10:54:28.441633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.441653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.441661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:00.999 [2024-11-07 10:54:28.447117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.447143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.447152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.452836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.452858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.452866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.458782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.458804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.458813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.464514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.464535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.464544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.469927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.469949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.469957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.475575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.475597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.475605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.481505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.481539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.481548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.486970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.486992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.487001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.492402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.492424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.492439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.497860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.497882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.497890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.503494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.503515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.503524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.509221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.509243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.509251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.515012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.515034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.515043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.520598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.520621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.520630] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.525679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.525701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.525710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.531356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.531378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.531387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.536995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.537018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.537026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.542616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.542638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.542650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.548081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.548103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.548112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.553519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.553541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.553550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.558958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.558979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.558988] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.564515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.564537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.564546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.570233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.570254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.570263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.575778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.575801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.575809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.581374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.581397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.581405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.999 [2024-11-07 10:54:28.586974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:00.999 [2024-11-07 10:54:28.586996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:00.999 [2024-11-07 10:54:28.587004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.000 [2024-11-07 10:54:28.592480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.000 [2024-11-07 10:54:28.592505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.000 [2024-11-07 10:54:28.592514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.000 [2024-11-07 10:54:28.598352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.000 [2024-11-07 10:54:28.598375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:01.000 [2024-11-07 10:54:28.598383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.000 [2024-11-07 10:54:28.603954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.000 [2024-11-07 10:54:28.603976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.000 [2024-11-07 10:54:28.603985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.000 [2024-11-07 10:54:28.610423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.000 [2024-11-07 10:54:28.610454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.000 [2024-11-07 10:54:28.610462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.000 [2024-11-07 10:54:28.616146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.000 [2024-11-07 10:54:28.616168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.000 [2024-11-07 10:54:28.616177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.000 [2024-11-07 10:54:28.621785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.000 [2024-11-07 10:54:28.621807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.000 [2024-11-07 10:54:28.621816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.000 [2024-11-07 10:54:28.627478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.000 [2024-11-07 10:54:28.627500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.000 [2024-11-07 10:54:28.627509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.000 [2024-11-07 10:54:28.633201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.000 [2024-11-07 10:54:28.633223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.000 [2024-11-07 10:54:28.633232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.000 5137.00 IOPS, 642.12 MiB/s [2024-11-07T09:54:28.671Z] [2024-11-07 10:54:28.639604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.000 [2024-11-07 10:54:28.639627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.000 [2024-11-07 10:54:28.639636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.000 [2024-11-07 10:54:28.645404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.000 [2024-11-07 10:54:28.645427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.000 [2024-11-07 10:54:28.645441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.000 [2024-11-07 10:54:28.652211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.000 [2024-11-07 10:54:28.652236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.000 [2024-11-07 10:54:28.652244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.000 [2024-11-07 10:54:28.660228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.000 [2024-11-07 10:54:28.660252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.000 [2024-11-07 10:54:28.660261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.667342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.667366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.667375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.673663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.673686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.673695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.679561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.679586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.679595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.685854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 
00:26:01.260 [2024-11-07 10:54:28.685877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.685886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.691583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.691606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.691615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.696924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.696953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.696962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.702391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.702415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.702424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.708054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.708076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.708085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.714271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.714294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.714303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.720245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.720267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.720275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.725855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.725877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.725885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.729550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.729572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.729581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.734341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.734363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.734371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.739993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.740015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.740024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.745599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.745620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.745629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.751212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.751235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.751243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.260 [2024-11-07 10:54:28.757021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.260 [2024-11-07 10:54:28.757043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.260 [2024-11-07 10:54:28.757051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.762757] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.762779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.762788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.768374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.768398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.768406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.774021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.774043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.774051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.779882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.779905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.779914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.785925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.785948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.785956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.791872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.791895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.791907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.797763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.797786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.797794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.803635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.803657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.803665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.809387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.809409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.809418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.815137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.815159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.815167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.820975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.820997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.821006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.826899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.826921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.826929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.832535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.832556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.832565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.838289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.838311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.838319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.844191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.844217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.844225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.849915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.849937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.849946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.855596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.855617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.855626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.861339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.861361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.861370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.866939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.866961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.866969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.872466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.872488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.872496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.877969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.877990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.877998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.883557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.883578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.883586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.889150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.889171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.889180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.894760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.894781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.261 [2024-11-07 10:54:28.894790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.261 [2024-11-07 10:54:28.900377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.261 [2024-11-07 10:54:28.900401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-07 10:54:28.900409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.262 [2024-11-07 10:54:28.905903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.262 [2024-11-07 10:54:28.905924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-07 10:54:28.905933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.262 [2024-11-07 10:54:28.911526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.262 [2024-11-07 10:54:28.911548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-07 10:54:28.911557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.262 [2024-11-07 10:54:28.917180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.262 [2024-11-07 10:54:28.917203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:01.262 [2024-11-07 10:54:28.917212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.262 [2024-11-07 10:54:28.923182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.262 [2024-11-07 10:54:28.923205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.262 [2024-11-07 10:54:28.923214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.521 [2024-11-07 10:54:28.929024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.521 [2024-11-07 10:54:28.929047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.521 [2024-11-07 10:54:28.929055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.521 [2024-11-07 10:54:28.934708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.521 [2024-11-07 10:54:28.934731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.521 [2024-11-07 10:54:28.934741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.521 [2024-11-07 10:54:28.940483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.521 [2024-11-07 10:54:28.940505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.521 [2024-11-07 10:54:28.940518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.521 [2024-11-07 10:54:28.945991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.521 [2024-11-07 10:54:28.946013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:28.946021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:28.952078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:28.952100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:28.952109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:28.958475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:28.958498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:28.958507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:28.965369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:28.965393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:28.965402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:28.973173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:28.973197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:28.973206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:28.981306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:28.981329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:28.981338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:28.988156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:28.988179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:28.988188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:28.991535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:28.991556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:28.991564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:28.997287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:28.997308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:28.997316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.003094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.003116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.003125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.008555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.008576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.008585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.013894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.013915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.013924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.019181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.019202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.019210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.024458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.024479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.024487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.029793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.029814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.029823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.035305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.035327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.035335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.040795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 
[2024-11-07 10:54:29.040817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.040830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.046352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.046373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.046382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.051427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.051456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.051465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.056632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.056654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.056662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.061863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.061885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.061893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.067024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.067045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.067054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.072263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.072285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.072293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.077584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.077606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.077614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.522 [2024-11-07 10:54:29.083065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.522 [2024-11-07 10:54:29.083087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.522 [2024-11-07 10:54:29.083095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.088577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.088603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.088612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.094117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.094140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.094148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.099256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.099279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.099287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.104632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.104654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.104662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.110003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.110026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.110034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.115319] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.115340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.115348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.120631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.120653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.120661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.125962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.125984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.125992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.130974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.130997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.131005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.136217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.136239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.136247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.141540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.141563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.141572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.146809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.146831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.146839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:26:01.523 [2024-11-07 10:54:29.152099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.152120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.152129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.157356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.157377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.157386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.162683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.162705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.162713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.168133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.168155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.168164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.173631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.173653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.173662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.178981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.179003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.179015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.523 [2024-11-07 10:54:29.184425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.523 [2024-11-07 10:54:29.184455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.523 [2024-11-07 10:54:29.184464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.783 [2024-11-07 10:54:29.190042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.783 [2024-11-07 10:54:29.190065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.783 [2024-11-07 10:54:29.190073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.783 [2024-11-07 10:54:29.195585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.783 [2024-11-07 10:54:29.195607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.783 [2024-11-07 10:54:29.195615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.783 [2024-11-07 10:54:29.201066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.783 [2024-11-07 10:54:29.201088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.783 [2024-11-07 10:54:29.201097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.783 [2024-11-07 10:54:29.206607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.783 [2024-11-07 10:54:29.206629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.783 [2024-11-07 10:54:29.206637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.783 [2024-11-07 10:54:29.212130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.212152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.212160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.217649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.217671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.217679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.223167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.223189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.223198] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.228700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.228727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.228735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.234230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.234252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.234261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.239111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.239132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.239140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.243062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.243083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.243092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.247811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.247833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.247841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.253114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.253135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.253144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.258422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.258448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 
10:54:29.258456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.264390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.264410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.264418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.269117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.269138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.269146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.274400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.274421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.274429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.279635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.279656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.279664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.284901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.284922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.284930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.290174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.290195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.290203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.295572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.295593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.295602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.301465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.301489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.301499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.306778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.306800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.306808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.312106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.312130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.312139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.317469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.317495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.317504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.322799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.322822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.322832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.328349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.328370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.328378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.334005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.784 [2024-11-07 10:54:29.334027] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.784 [2024-11-07 10:54:29.334036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.784 [2024-11-07 10:54:29.339793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.339814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.339825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.345453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.345475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.345483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.350952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.350974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.350982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.356422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.356450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.356458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.361955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.361978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.361987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.367482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.367503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.367512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.373039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.373060] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.373068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.378443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.378465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.378473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.383780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.383802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.383811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.389309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.389331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.389339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.394833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.394856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.394864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.400012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.400035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.400043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.405467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.405488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.405497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.410855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.410878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.410892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.416245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.416268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.416277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.421692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.421715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.421724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.427224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.427247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.427256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.432742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.432764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.432773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.438305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.438328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.438336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.443799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.443822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.443831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.785 [2024-11-07 10:54:29.449168] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:01.785 [2024-11-07 10:54:29.449191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:01.785 [2024-11-07 10:54:29.449200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.045 [2024-11-07 10:54:29.454559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.045 [2024-11-07 10:54:29.454582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.045 [2024-11-07 10:54:29.454591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.045 [2024-11-07 10:54:29.459941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.045 [2024-11-07 10:54:29.459967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.045 [2024-11-07 10:54:29.459985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.045 [2024-11-07 10:54:29.464979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.045 [2024-11-07 10:54:29.465003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.045 [2024-11-07 10:54:29.465012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.045 [2024-11-07 10:54:29.470215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.045 [2024-11-07 10:54:29.470237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.045 [2024-11-07 10:54:29.470246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.045 [2024-11-07 10:54:29.475466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.045 [2024-11-07 10:54:29.475488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.045 [2024-11-07 10:54:29.475497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.045 [2024-11-07 10:54:29.480628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.045 [2024-11-07 10:54:29.480650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.045 [2024-11-07 10:54:29.480658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:26:02.046 [2024-11-07 10:54:29.485863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.485885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.485895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.491043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.491064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.491074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.496255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.496278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.496287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.501416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.501447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.501456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.506626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.506647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.506657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.511790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.511812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.511821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.517029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.517051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.517060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.522349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.522371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.522380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.527751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.527774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.527782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.533250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.533273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.533281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.538712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.538733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.538741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.544175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.544198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.544206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.549657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.549680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.549692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.555172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.555194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.555204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.560529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.560551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.560560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.565890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.565913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.565921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.571195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.571216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.571225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.576604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.576626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.576635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.580284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.580305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.580313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.584842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.584863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.584870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.590207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.590228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:02.046 [2024-11-07 10:54:29.590238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.595588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.595610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.595619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.600585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.600606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.600615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.605863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.605886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.046 [2024-11-07 10:54:29.605894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:02.046 [2024-11-07 10:54:29.611117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.046 [2024-11-07 10:54:29.611139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.047 [2024-11-07 10:54:29.611150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.047 [2024-11-07 10:54:29.616291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.047 [2024-11-07 10:54:29.616314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.047 [2024-11-07 10:54:29.616322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:02.047 [2024-11-07 10:54:29.621464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.047 [2024-11-07 10:54:29.621485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.047 [2024-11-07 10:54:29.621495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:02.047 [2024-11-07 10:54:29.626705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650) 00:26:02.047 [2024-11-07 10:54:29.626726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22272 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.047 [2024-11-07 10:54:29.626734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:02.047 [2024-11-07 10:54:29.632087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650)
00:26:02.047 [2024-11-07 10:54:29.632109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.047 [2024-11-07 10:54:29.632117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:02.047 [2024-11-07 10:54:29.637471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2074650)
00:26:02.047 [2024-11-07 10:54:29.637493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:02.047 [2024-11-07 10:54:29.637505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:02.047 5382.00 IOPS, 672.75 MiB/s
00:26:02.047 Latency(us)
00:26:02.047 [2024-11-07T09:54:29.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:02.047 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:02.047 nvme0n1 : 2.00 5382.02 672.75 0.00 0.00 2970.31 669.61 8833.11
00:26:02.047 [2024-11-07T09:54:29.718Z] ===================================================================================================================
00:26:02.047 [2024-11-07T09:54:29.718Z] Total : 5382.02 672.75 0.00 0.00 2970.31 669.61 8833.11
00:26:02.047 {
00:26:02.047 "results": [
00:26:02.047 {
00:26:02.047 "job": "nvme0n1",
00:26:02.047 "core_mask": "0x2",
00:26:02.047 "workload": "randread",
00:26:02.047 "status": "finished",
00:26:02.047 "queue_depth": 16,
00:26:02.047 "io_size": 131072,
00:26:02.047 "runtime": 2.002965,
00:26:02.047 "iops": 5382.021153639729,
00:26:02.047 "mibps": 672.7526442049661,
00:26:02.047 "io_failed": 0,
00:26:02.047 "io_timeout": 0,
00:26:02.047 "avg_latency_us": 2970.3077221908525,
00:26:02.047 "min_latency_us": 669.6069565217391,
00:26:02.047 "max_latency_us": 8833.11304347826
00:26:02.047 }
00:26:02.047 ],
00:26:02.047 "core_count": 1
00:26:02.047 }
00:26:02.047 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:02.047 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:02.047 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:02.047 | .driver_specific
00:26:02.047 | .nvme_error
00:26:02.047 | .status_code
00:26:02.047 | .command_transient_transport_error'
00:26:02.047 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:02.305 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 347 > 0 ))
00:26:02.305 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2830339
00:26:02.305 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@952 -- # '[' -z 2830339 ']' 00:26:02.305 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2830339 00:26:02.305 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:02.305 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:02.305 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2830339 00:26:02.305 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:02.305 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:02.305 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2830339' 00:26:02.305 killing process with pid 2830339 00:26:02.305 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2830339 00:26:02.305 Received shutdown signal, test time was about 2.000000 seconds 00:26:02.305 00:26:02.305 Latency(us) 00:26:02.305 [2024-11-07T09:54:29.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.305 [2024-11-07T09:54:29.976Z] =================================================================================================================== 00:26:02.305 [2024-11-07T09:54:29.976Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:02.305 10:54:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2830339 00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2831006 00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2831006 /var/tmp/bperf.sock 00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2831006 ']' 00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:02.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
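For reference, the transient-error check that closes the randread run above reduces to a single bdev_get_iostat RPC against bdevperf's socket plus a jq filter over the returned NVMe error counters. The following is a minimal standalone sketch of that pattern, not the test's own host/digest.sh helper: the checkout path, socket and jq filter are taken from the trace, while the function name and the exit handling are additions made here for illustration.

    #!/usr/bin/env bash
    # Sketch: read the COMMAND TRANSIENT TRANSPORT ERROR counter exposed by
    # bdevperf through bdev_get_iostat, over its RPC socket.
    set -euo pipefail

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from the trace
    BPERF_SOCK=/var/tmp/bperf.sock                                # bdevperf RPC socket

    get_transient_errcount() {
        local bdev=$1
        "$SPDK_ROOT/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    count=$(get_transient_errcount nvme0n1)
    # The digest-error pass is only meaningful when corrupted digests actually
    # surfaced as transient transport errors (347 of them in the run above).
    if (( count > 0 )); then
        echo "observed $count transient transport errors"
    else
        echo "no transient transport errors recorded" >&2
        exit 1
    fi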
00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:02.563 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:02.563 [2024-11-07 10:54:30.128370] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:26:02.563 [2024-11-07 10:54:30.128423] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831006 ] 00:26:02.563 [2024-11-07 10:54:30.192598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.822 [2024-11-07 10:54:30.236045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.822 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:02.822 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:02.822 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:02.822 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:03.080 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:03.080 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.080 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.080 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.080 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.080 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.339 nvme0n1 00:26:03.339 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:03.339 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.339 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.339 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.339 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:03.339 10:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:03.339 Running I/O for 2 seconds... 
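The randwrite pass that follows is driven entirely over JSON-RPC: bdevperf is told to keep NVMe error statistics and retry failed I/O without limit, the controller is attached over TCP with the data digest enabled, the target's accel layer is told to inject crc32c corruption (the -o crc32c -t corrupt -i 256 call in the trace), and perform_tests then starts the workload. A minimal sketch of that sequence follows; the RPC names and arguments come from the trace, while the rpc wrapper and the target socket path (/var/tmp/spdk.sock) are assumptions.

    #!/usr/bin/env bash
    # Sketch: RPC sequence behind the randwrite digest-error run above.
    set -euo pipefail

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock   # bdevperf RPC socket (from the trace)
    TGT_SOCK=/var/tmp/spdk.sock      # assumed default socket of the nvmf target app

    rpc() { "$SPDK_ROOT/scripts/rpc.py" -s "$1" "${@:2}"; }

    # Collect per-status-code NVMe error counters and retry failed I/O without
    # limit (-1), so injected digest errors show up as transient transport errors.
    rpc "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the remote namespace over TCP with the data digest (--ddgst) enabled.
    rpc "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject crc32c corruption in the target's accel layer (arguments as in the
    # trace), then kick off the bdevperf workload.
    rpc "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests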
00:26:03.339 [2024-11-07 10:54:30.916977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.339 [2024-11-07 10:54:30.917154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.339 [2024-11-07 10:54:30.917182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.339 [2024-11-07 10:54:30.926690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.339 [2024-11-07 10:54:30.926856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.339 [2024-11-07 10:54:30.926880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.339 [2024-11-07 10:54:30.936473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.339 [2024-11-07 10:54:30.936638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.339 [2024-11-07 10:54:30.936657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.339 [2024-11-07 10:54:30.946271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.339 [2024-11-07 10:54:30.946441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.339 [2024-11-07 10:54:30.946460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.339 [2024-11-07 10:54:30.956035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.339 [2024-11-07 10:54:30.956197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.339 [2024-11-07 10:54:30.956216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.339 [2024-11-07 10:54:30.965971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.339 [2024-11-07 10:54:30.966135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.339 [2024-11-07 10:54:30.966154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.339 [2024-11-07 10:54:30.975698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.339 [2024-11-07 10:54:30.975860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.339 [2024-11-07 10:54:30.975878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:03.339 [2024-11-07 10:54:30.985405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.339 [2024-11-07 10:54:30.985574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.339 [2024-11-07 10:54:30.985597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.339 [2024-11-07 10:54:30.995428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.339 [2024-11-07 10:54:30.995593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.339 [2024-11-07 10:54:30.995611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.340 [2024-11-07 10:54:31.005253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.340 [2024-11-07 10:54:31.005416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.340 [2024-11-07 10:54:31.005441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.598 [2024-11-07 10:54:31.015193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.598 [2024-11-07 10:54:31.015353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.598 [2024-11-07 10:54:31.015373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.598 [2024-11-07 10:54:31.024909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.598 [2024-11-07 10:54:31.025068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.598 [2024-11-07 10:54:31.025086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.598 [2024-11-07 10:54:31.034609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.598 [2024-11-07 10:54:31.034772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.598 [2024-11-07 10:54:31.034790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.598 [2024-11-07 10:54:31.044314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.598 [2024-11-07 10:54:31.044480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.598 [2024-11-07 10:54:31.044499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:03.598 [2024-11-07 10:54:31.054006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.054167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.054185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.063666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.063825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.063843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.073379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.073549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.073570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.083228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.083388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.083406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.092922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.093082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.093100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.102648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.102807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.102825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.112322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.112489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.112506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.122038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.122200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.122218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.131748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.131910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.131928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.141377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.141545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.141563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.151114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.151276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.151295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.160829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.160998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.161015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.170455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.170617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.170635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.180468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.180633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.180651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.190326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.190493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.190511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.200044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.200210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.200228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.209750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.209908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.209926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.219449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.219612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.219630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.229167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.229326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.229345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.238868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.239030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.239048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.248561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.248723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.248741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.599 [2024-11-07 10:54:31.258249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.599 [2024-11-07 10:54:31.258411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.599 [2024-11-07 10:54:31.258429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.268168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.268342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.268361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.277902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.278063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.278082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.287612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.287773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.287791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.297315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.297484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.297502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.307040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.307201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.307219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.316750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.316911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.316928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.326452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.326615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.326636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.336178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.336338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.336357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.345860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.346020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.346038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.355593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.355767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.355784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.365271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.365432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.365455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.375280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.375457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.375475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.385272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.385441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.385458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.395418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.395595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.395614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.406193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.406368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.406386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.416850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.417024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.417043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.427091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.427258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.427277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.437399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.437578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.437597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.447746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.447924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.447943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.458480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.859 [2024-11-07 10:54:31.458647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.859 [2024-11-07 10:54:31.458666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.859 [2024-11-07 10:54:31.468771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.860 [2024-11-07 10:54:31.468938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.860 [2024-11-07 10:54:31.468956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.860 [2024-11-07 10:54:31.479052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.860 [2024-11-07 10:54:31.479241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.860 [2024-11-07 10:54:31.479261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.860 [2024-11-07 10:54:31.489875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.860 [2024-11-07 10:54:31.490042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.860 [2024-11-07 10:54:31.490060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.860 [2024-11-07 10:54:31.500171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.860 [2024-11-07 10:54:31.500340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.860 [2024-11-07 10:54:31.500359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.860 [2024-11-07 10:54:31.510440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.860 [2024-11-07 10:54:31.510610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.860 [2024-11-07 10:54:31.510628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.860 [2024-11-07 10:54:31.520691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:03.860 [2024-11-07 10:54:31.520855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:03.860 [2024-11-07 10:54:31.520872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.530794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.530956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.530977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.540760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.540925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.540944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.550711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.550871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.550889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.560412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.560577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.560595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.570136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.570294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.570313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.579860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.580021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.580039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.589562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.589723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.589741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.599272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.599431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.599453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.609050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.609209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.609227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.618797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.618969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.618988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.628751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.628910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.628928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.638446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.638605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.638624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.648108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.648269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.648286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.657851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.658011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.658029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.667535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.667695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.667712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.677221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.677380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.119 [2024-11-07 10:54:31.677402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.119 [2024-11-07 10:54:31.687183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.119 [2024-11-07 10:54:31.687343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.120 [2024-11-07 10:54:31.687361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.120 [2024-11-07 10:54:31.696854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.120 [2024-11-07 10:54:31.697012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.120 [2024-11-07 10:54:31.697029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.120 [2024-11-07 10:54:31.706572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.120 [2024-11-07 10:54:31.706731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.120 [2024-11-07 10:54:31.706749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.120 [2024-11-07 10:54:31.716264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.120 [2024-11-07 10:54:31.716423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.120 [2024-11-07 10:54:31.716445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.120 [2024-11-07 10:54:31.725936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.120 [2024-11-07 10:54:31.726096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.120 [2024-11-07 10:54:31.726114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.120 [2024-11-07 10:54:31.735658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.120 [2024-11-07 10:54:31.735816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.120 [2024-11-07 10:54:31.735834] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.120 [2024-11-07 10:54:31.745316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.120 [2024-11-07 10:54:31.745483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.120 [2024-11-07 10:54:31.745501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.120 [2024-11-07 10:54:31.755020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.120 [2024-11-07 10:54:31.755178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.120 [2024-11-07 10:54:31.755196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.120 [2024-11-07 10:54:31.764714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.120 [2024-11-07 10:54:31.764878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.120 [2024-11-07 10:54:31.764896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.120 [2024-11-07 10:54:31.774407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.120 [2024-11-07 10:54:31.774574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.120 [2024-11-07 10:54:31.774592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.120 [2024-11-07 10:54:31.784236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.120 [2024-11-07 10:54:31.784402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.120 [2024-11-07 10:54:31.784421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.379 [2024-11-07 10:54:31.794151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.794312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.794330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.803888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.804046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.804065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.813592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.813751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.813768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.823286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.823448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.823466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.832951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.833111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.833130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.842687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.842846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.842863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.852340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.852503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.852521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.862062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.862220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.862237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.871727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.871886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.871904] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.881419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.881587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.881605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.891130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.891289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.891306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.900800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.900961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.900978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 25785.00 IOPS, 100.72 MiB/s [2024-11-07T09:54:32.051Z] [2024-11-07 10:54:31.910500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.910659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.910677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.920278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.920440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.920457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.929969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.930130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.930154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.939894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.940047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24704 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.940065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.949560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.949720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.949738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.959333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.959503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.959522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.969265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.969427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.969454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.978987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.979148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.979166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.988702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.988866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.988883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:31.998403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:31.998571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:31.998589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:32.008030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:32.008193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14905 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:32.008210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:32.017811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.380 [2024-11-07 10:54:32.017980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.380 [2024-11-07 10:54:32.017998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.380 [2024-11-07 10:54:32.027566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.381 [2024-11-07 10:54:32.027727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.381 [2024-11-07 10:54:32.027744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.381 [2024-11-07 10:54:32.037266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.381 [2024-11-07 10:54:32.037425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.381 [2024-11-07 10:54:32.037448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.047176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.047343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.047363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.056989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.057147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.057166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.066698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.066859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.066877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.076400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.076566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5403 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.076584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.086104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.086265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.086282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.095929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.096088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.096106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.105631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.105791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.105808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.115303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.115469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.115486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.124995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.125155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.125173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.134669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.134827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.134845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.144381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.144545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17106 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.144563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.154063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.154222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.154241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.163779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.163938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.163956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.173459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.173620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.173637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.183177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.183336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.640 [2024-11-07 10:54:32.183357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.640 [2024-11-07 10:54:32.193087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.640 [2024-11-07 10:54:32.193247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.641 [2024-11-07 10:54:32.193265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.641 [2024-11-07 10:54:32.202823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.641 [2024-11-07 10:54:32.202983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.641 [2024-11-07 10:54:32.203001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.641 [2024-11-07 10:54:32.212511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.641 [2024-11-07 10:54:32.212671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:9586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.641 [2024-11-07 10:54:32.212689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.641 [2024-11-07 10:54:32.222211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.641 [2024-11-07 10:54:32.222370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.641 [2024-11-07 10:54:32.222388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.641 [2024-11-07 10:54:32.231929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.641 [2024-11-07 10:54:32.232089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.641 [2024-11-07 10:54:32.232108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.641 [2024-11-07 10:54:32.241614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.641 [2024-11-07 10:54:32.241774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.641 [2024-11-07 10:54:32.241792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.641 [2024-11-07 10:54:32.251232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.641 [2024-11-07 10:54:32.251393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.641 [2024-11-07 10:54:32.251410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.641 [2024-11-07 10:54:32.260943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.641 [2024-11-07 10:54:32.261106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.641 [2024-11-07 10:54:32.261126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.641 [2024-11-07 10:54:32.270628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.641 [2024-11-07 10:54:32.270785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.641 [2024-11-07 10:54:32.270806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.641 [2024-11-07 10:54:32.280361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.641 [2024-11-07 10:54:32.280528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:1545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.641 [2024-11-07 10:54:32.280546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.641 [2024-11-07 10:54:32.290058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.641 [2024-11-07 10:54:32.290220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.641 [2024-11-07 10:54:32.290237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.641 [2024-11-07 10:54:32.299770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.641 [2024-11-07 10:54:32.299930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.641 [2024-11-07 10:54:32.299947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.309699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.309857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.309879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.319476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.319644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.319662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.329131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.329293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.329310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.338866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.339025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.339044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.348553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.348714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:9 nsid:1 lba:14130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.348731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.358241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.358396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.358413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.367953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.368112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.368130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.377653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.377813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.377830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.387366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.387535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.387553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.397096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.397258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.397276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.406810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.406973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.406991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.416514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.416676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:2752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.416694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.426193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.426352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.426370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.435939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.436100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.436118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.445894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.446055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.446072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.455580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.455741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.455758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.901 [2024-11-07 10:54:32.465294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.901 [2024-11-07 10:54:32.465455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.901 [2024-11-07 10:54:32.465472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.902 [2024-11-07 10:54:32.474977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.902 [2024-11-07 10:54:32.475137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.902 [2024-11-07 10:54:32.475155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.902 [2024-11-07 10:54:32.484653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.902 [2024-11-07 10:54:32.484813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:1192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.902 [2024-11-07 10:54:32.484831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.902 [2024-11-07 10:54:32.494349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.902 [2024-11-07 10:54:32.494513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.902 [2024-11-07 10:54:32.494531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.902 [2024-11-07 10:54:32.504017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.902 [2024-11-07 10:54:32.504178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.902 [2024-11-07 10:54:32.504196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.902 [2024-11-07 10:54:32.513892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.902 [2024-11-07 10:54:32.514054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.902 [2024-11-07 10:54:32.514073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.902 [2024-11-07 10:54:32.523878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.902 [2024-11-07 10:54:32.524045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.902 [2024-11-07 10:54:32.524066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.902 [2024-11-07 10:54:32.533578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.902 [2024-11-07 10:54:32.533739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.902 [2024-11-07 10:54:32.533757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.902 [2024-11-07 10:54:32.543358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.902 [2024-11-07 10:54:32.543524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.902 [2024-11-07 10:54:32.543543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.902 [2024-11-07 10:54:32.553077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.902 [2024-11-07 10:54:32.553236] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.902 [2024-11-07 10:54:32.553254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.902 [2024-11-07 10:54:32.562776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:04.902 [2024-11-07 10:54:32.562953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.902 [2024-11-07 10:54:32.562971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.161 [2024-11-07 10:54:32.572778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.161 [2024-11-07 10:54:32.572951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.161 [2024-11-07 10:54:32.572970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.161 [2024-11-07 10:54:32.582779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.161 [2024-11-07 10:54:32.582943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.161 [2024-11-07 10:54:32.582962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.161 [2024-11-07 10:54:32.592573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.161 [2024-11-07 10:54:32.592735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.161 [2024-11-07 10:54:32.592754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.161 [2024-11-07 10:54:32.602316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.161 [2024-11-07 10:54:32.602484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.161 [2024-11-07 10:54:32.602502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.611993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.612160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.612178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.621810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.621968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.621986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.631492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.631652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.631670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.641200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.641363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.641382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.650929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.651094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.651111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.660626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.660790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.660808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.670339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.670507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.670525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.680049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.680208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.680227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.689796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.689957] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.689974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.699763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.699927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.699944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.709554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.709717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.709735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.719231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.719392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.719410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.728944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.729106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.729123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.738644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.738802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.738821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.748350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.748525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.748543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.758044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 
10:54:32.758208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.758225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.767733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.767910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.767928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.777451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.777610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.777627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.787181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.787348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.787366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.796870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.797032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.797049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.806598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.806759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.806777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.816294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.162 [2024-11-07 10:54:32.816467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.162 [2024-11-07 10:54:32.816485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.162 [2024-11-07 10:54:32.826097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.163 
[2024-11-07 10:54:32.826261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.163 [2024-11-07 10:54:32.826281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.422 [2024-11-07 10:54:32.836036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.422 [2024-11-07 10:54:32.836189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.422 [2024-11-07 10:54:32.836207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.422 [2024-11-07 10:54:32.845765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.422 [2024-11-07 10:54:32.845926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.422 [2024-11-07 10:54:32.845944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.422 [2024-11-07 10:54:32.855488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.422 [2024-11-07 10:54:32.855651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.422 [2024-11-07 10:54:32.855669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.422 [2024-11-07 10:54:32.865185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.422 [2024-11-07 10:54:32.865345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.422 [2024-11-07 10:54:32.865366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.422 [2024-11-07 10:54:32.874904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.422 [2024-11-07 10:54:32.875062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.422 [2024-11-07 10:54:32.875080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.422 [2024-11-07 10:54:32.884626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.422 [2024-11-07 10:54:32.884787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.422 [2024-11-07 10:54:32.884805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.422 [2024-11-07 10:54:32.894310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640 00:26:05.422 
[2024-11-07 10:54:32.894476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:05.422 [2024-11-07 10:54:32.894495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:05.422 [2024-11-07 10:54:32.904025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e500) with pdu=0x2000166fd640
00:26:05.422 [2024-11-07 10:54:32.904186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:05.422 [2024-11-07 10:54:32.904204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:05.422 26017.00 IOPS, 101.63 MiB/s
00:26:05.422 Latency(us)
00:26:05.422 [2024-11-07T09:54:33.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:05.422 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:05.422 nvme0n1 : 2.01 26019.59 101.64 0.00 0.00 4910.89 3575.99 12594.31
00:26:05.422 [2024-11-07T09:54:33.093Z] ===================================================================================================================
00:26:05.422 [2024-11-07T09:54:33.093Z] Total : 26019.59 101.64 0.00 0.00 4910.89 3575.99 12594.31
00:26:05.422 {
00:26:05.422 "results": [
00:26:05.422 {
00:26:05.422 "job": "nvme0n1",
00:26:05.422 "core_mask": "0x2",
00:26:05.422 "workload": "randwrite",
00:26:05.422 "status": "finished",
00:26:05.422 "queue_depth": 128,
00:26:05.422 "io_size": 4096,
00:26:05.422 "runtime": 2.00595,
00:26:05.422 "iops": 26019.59171464892,
00:26:05.422 "mibps": 101.63903013534734,
00:26:05.422 "io_failed": 0,
00:26:05.422 "io_timeout": 0,
00:26:05.422 "avg_latency_us": 4910.8904826308535,
00:26:05.422 "min_latency_us": 3575.986086956522,
00:26:05.423 "max_latency_us": 12594.30956521739
00:26:05.423 }
00:26:05.423 ],
00:26:05.423 "core_count": 1
00:26:05.423 }
00:26:05.423 10:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:05.423 10:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:05.423 10:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:05.423 | .driver_specific
00:26:05.423 | .nvme_error
00:26:05.423 | .status_code
00:26:05.423 | .command_transient_transport_error'
00:26:05.423 10:54:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:05.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 204 > 0 ))
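For reference, the number checked by (( 204 > 0 )) above is the per-bdev count of COMMAND TRANSIENT TRANSPORT ERROR completions that bdev_get_iostat reports once bdev_nvme_set_options --nvme-error-stat is in effect. A minimal stand-alone sketch of that query, assuming a bdevperf instance is already serving RPCs on /var/tmp/bperf.sock and exposes the attached namespace as nvme0n1 (the rpc.py path follows this job's workspace layout):

  #!/usr/bin/env bash
  # Count transient transport errors recorded for nvme0n1 via the bperf RPC socket.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest-error test only passes when at least one such completion was observed.
  (( errcount > 0 )) && echo "observed ${errcount} transient transport errors"

This is the same jq path the trace shows, just collapsed onto one line.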
00:26:05.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2831006
00:26:05.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2831006 ']'
00:26:05.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2831006
00:26:05.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:26:05.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:05.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2831006
00:26:05.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:26:05.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:26:05.681 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2831006'
00:26:05.681 killing process with pid 2831006
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2831006
00:26:05.682 Received shutdown signal, test time was about 2.000000 seconds
00:26:05.682
00:26:05.682 Latency(us)
00:26:05.682 [2024-11-07T09:54:33.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:05.682 [2024-11-07T09:54:33.353Z] ===================================================================================================================
00:26:05.682 [2024-11-07T09:54:33.353Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2831006
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2831497
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2831497 /var/tmp/bperf.sock
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2831497 ']'
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:05.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:26:05.682 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:05.941 [2024-11-07 10:54:33.383691] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization...
00:26:05.941 [2024-11-07 10:54:33.383741] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831497 ]
00:26:05.941 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:05.941 Zero copy mechanism will not be used.
00:26:05.941 [2024-11-07 10:54:33.446363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:05.941 [2024-11-07 10:54:33.488467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:05.941 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:26:05.941 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
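The waitforlisten step above blocks until the newly started bdevperf process accepts RPCs on /var/tmp/bperf.sock (up to max_retries attempts). A rough equivalent is sketched below with a plain socket-polling loop; the loop, its retry count, and the sleep interval are illustrative assumptions, not the actual helper from autotest_common.sh:

  #!/usr/bin/env bash
  # Start bdevperf on core 1 (-m 2) in wait-for-RPC mode (-z) and wait for its socket.
  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  "$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  for _ in $(seq 1 100); do
      [[ -S /var/tmp/bperf.sock ]] && break           # RPC socket is up
      kill -0 "$bperfpid" 2>/dev/null || { echo "bdevperf exited early" >&2; exit 1; }
      sleep 0.1
  done

With -z the randwrite job defined by -w/-o/-q/-t stays queued until a later perform_tests RPC releases it, which is why the configuration calls that follow can still change the error-injection settings first.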
00:26:05.941 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:05.941 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:06.200 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:06.200 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:06.200 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:06.200 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:06.200 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:06.200 10:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:06.459 nvme0n1
00:26:06.459 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:06.459 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:06.459 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:06.459 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:06.459 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:06.459 10:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:06.718 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:06.718 Zero copy mechanism will not be used.
00:26:06.718 Running I/O for 2 seconds...
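The xtrace above captures the whole setup for this error-injection pass. Re-traced as a plain script (same RPCs and arguments as in the log; the rpc.py and bdevperf.py paths assume this job's workspace layout), the sequence is: keep per-command NVMe error statistics, attach the TCP target with data digest enabled, arm crc32c corruption in the accel layer, then release the queued bdevperf job:

  #!/usr/bin/env bash
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

  # Record per-command NVMe error counters; bdev retry count of -1 as in the trace.
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Start from a clean state: no crc32c error injection.
  rpc accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe/TCP target with data digest (--ddgst) so payload CRC32C is checked.
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm crc32c corruption with the traced interval argument (-i 32).
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the queued randwrite job; the digest errors below follow from this point.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each corrupted digest surfaces as a data_crc32_calc_done error on the affected qpair, and the write completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); that is what the run below logs and what the earlier iostat query counts.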
00:26:06.718 [2024-11-07 10:54:34.192980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.718 [2024-11-07 10:54:34.193249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.718 [2024-11-07 10:54:34.193278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.199572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.199703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.199724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.207804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.208070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.208093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.214388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.214648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.214671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.219491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.219757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.219778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.224666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.224919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.224940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.230290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.230463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.230483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.236320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.236592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.236612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.242497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.242767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.242788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.250405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.250676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.250697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.257404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.257497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.257515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.264693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.264951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.264978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.272666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.272947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.272968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.278738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.279002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.279023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.284381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.284637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.284658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.290665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.290917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.290938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.296272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.296544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.296565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.302760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.303012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.303033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.309374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.309634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.309655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.315216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.315476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.315497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.320607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.320867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.320887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.327269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.327530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.327551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.333852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.334105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.334127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.341267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.341539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.341560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.346954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.347193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.719 [2024-11-07 10:54:34.347214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.719 [2024-11-07 10:54:34.351958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.719 [2024-11-07 10:54:34.352209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.720 [2024-11-07 10:54:34.352229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.720 [2024-11-07 10:54:34.357182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.720 [2024-11-07 10:54:34.357440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.720 [2024-11-07 10:54:34.357460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.720 [2024-11-07 10:54:34.362586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.720 [2024-11-07 10:54:34.362837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.720 
[2024-11-07 10:54:34.362857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.720 [2024-11-07 10:54:34.367546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.720 [2024-11-07 10:54:34.367804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.720 [2024-11-07 10:54:34.367824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.720 [2024-11-07 10:54:34.372350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.720 [2024-11-07 10:54:34.372623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.720 [2024-11-07 10:54:34.372644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.720 [2024-11-07 10:54:34.377030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.720 [2024-11-07 10:54:34.377296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.720 [2024-11-07 10:54:34.377316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.720 [2024-11-07 10:54:34.381910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.720 [2024-11-07 10:54:34.382172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.720 [2024-11-07 10:54:34.382194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.980 [2024-11-07 10:54:34.386730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.980 [2024-11-07 10:54:34.386986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.980 [2024-11-07 10:54:34.387009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.980 [2024-11-07 10:54:34.391643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.980 [2024-11-07 10:54:34.391896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.980 [2024-11-07 10:54:34.391918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.980 [2024-11-07 10:54:34.396344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.980 [2024-11-07 10:54:34.396603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.980 [2024-11-07 10:54:34.396625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.980 [2024-11-07 10:54:34.400943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.980 [2024-11-07 10:54:34.401196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.980 [2024-11-07 10:54:34.401217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.980 [2024-11-07 10:54:34.405549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.980 [2024-11-07 10:54:34.405820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.980 [2024-11-07 10:54:34.405840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.980 [2024-11-07 10:54:34.410142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.980 [2024-11-07 10:54:34.410397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.980 [2024-11-07 10:54:34.410421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.980 [2024-11-07 10:54:34.415046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.980 [2024-11-07 10:54:34.415312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.980 [2024-11-07 10:54:34.415332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.419791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.420056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.420077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.424504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.424762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.424783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.429266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.429528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.429549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.434129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.434395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.434417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.438991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.439248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.439269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.444369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.444630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.444652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.450994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.451270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.451291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.457187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.457456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.457477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.462977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.463233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.463254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.468453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.468720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.468740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.473694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.473963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.473983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.479360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.479632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.479652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.485847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.486117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.486138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.492074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.492338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.492358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.497249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.497506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.497526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.502580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.502848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.502868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.507802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 
[2024-11-07 10:54:34.508050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.508071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.513009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.513262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.513282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.518074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.518333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.518353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.522874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.523128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.523148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.527682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.527947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.527967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.532272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.532530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.532550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.536908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.537161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.537182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.541510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.541766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.541787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.981 [2024-11-07 10:54:34.546148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.981 [2024-11-07 10:54:34.546406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.981 [2024-11-07 10:54:34.546431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.550775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.551027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.551048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.555627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.555879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.555899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.560559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.560815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.560835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.565216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.565475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.565495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.570226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.570496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.570517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.575060] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.575313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.575333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.579808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.580074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.580094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.584984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.585239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.585259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.590143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.590403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.590424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.596030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.596473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.596493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.601857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.602109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.602129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.607704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.607957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.607978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:06.982 [2024-11-07 10:54:34.613334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.613589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.613610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.619101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.619352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.619374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.624566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.624820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.624840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.630623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.630880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.630901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.636572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.636825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.636846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.982 [2024-11-07 10:54:34.642445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:06.982 [2024-11-07 10:54:34.642709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.982 [2024-11-07 10:54:34.642730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.647940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.648001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.648019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.653770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.654025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.654047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.659493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.659552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.659570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.665298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.665568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.665589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.670708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.670958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.670978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.677079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.677332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.677352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.685034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.685277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.685297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.692392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.692662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.692688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.699781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.700046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.700067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.706454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.706712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.706733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.713348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.713615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.713636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.719681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.719931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.719952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.727092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.727344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.727365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.734431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.734694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.734714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.741738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.741991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.742011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.748625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.748888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.748909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.755529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.755603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.755621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.762692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.762945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.762965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.769424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.769728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.243 [2024-11-07 10:54:34.769748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.243 [2024-11-07 10:54:34.776067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.243 [2024-11-07 10:54:34.776331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.776352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.783030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.783187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.783205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.788467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.788724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 
[2024-11-07 10:54:34.788746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.793994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.794261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.794281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.799928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.800180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.800201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.807032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.807299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.807324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.813026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.813292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.813313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.818772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.819026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.819047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.823589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.823843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.823863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.828287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.828546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.828567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.833053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.833304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.833324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.837877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.838133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.838154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.842590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.842844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.842865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.847270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.847524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.847545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.852509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.852771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.852792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.857287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.857551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.857571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.861883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.862141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.862162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.866541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.866795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.866816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.871179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.871440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.871461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.876021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.876274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.876295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.880634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.880888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.880909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.885416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.885679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.885700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.890748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.890999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.891020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.895664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.895917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.895938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.900736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.900991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.244 [2024-11-07 10:54:34.901011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.244 [2024-11-07 10:54:34.906576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.244 [2024-11-07 10:54:34.906866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.245 [2024-11-07 10:54:34.906887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.504 [2024-11-07 10:54:34.912637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.504 [2024-11-07 10:54:34.912905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.504 [2024-11-07 10:54:34.912927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.504 [2024-11-07 10:54:34.917819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.504 [2024-11-07 10:54:34.918071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.504 [2024-11-07 10:54:34.918092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.504 [2024-11-07 10:54:34.922800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.504 [2024-11-07 10:54:34.923053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.504 [2024-11-07 10:54:34.923073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.504 [2024-11-07 10:54:34.928048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.504 [2024-11-07 10:54:34.928303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.504 [2024-11-07 10:54:34.928323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:34.932940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 
[2024-11-07 10:54:34.933193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:34.933214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:34.937822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:34.938081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:34.938105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:34.942835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:34.943098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:34.943118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:34.948214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:34.948477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:34.948498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:34.953822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:34.954077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:34.954097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:34.959346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:34.959612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:34.959634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:34.965696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:34.965951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:34.965972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:34.971272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:34.971530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:34.971551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:34.977424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:34.977692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:34.977713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:34.982932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:34.983183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:34.983203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:34.988341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:34.988604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:34.988626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:34.993454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:34.993709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:34.993731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:34.999032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:34.999285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:34.999305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:35.004926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:35.005180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:35.005201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:35.010735] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:35.010793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:35.010811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:35.016457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:35.016710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:35.016730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:35.021848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:35.022098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:35.022118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:35.027034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:35.027285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:35.027306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:35.032977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:35.033228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:35.033248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:35.038374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:35.038644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:35.038665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:35.044201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:35.044466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:35.044486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:07.505 [2024-11-07 10:54:35.049897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:35.050150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:35.050171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:35.055593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:35.055856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:35.055877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.505 [2024-11-07 10:54:35.060894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.505 [2024-11-07 10:54:35.061159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.505 [2024-11-07 10:54:35.061179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.066510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.066777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.066797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.071972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.072030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.072048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.077795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.078058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.078079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.083185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.083443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.083467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.088360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.088631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.088652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.093525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.093777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.093797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.099104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.099367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.099388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.104495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.104745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.104766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.109847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.110098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.110118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.115371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.115637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.115658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.121124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.121376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.121396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.126217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.126474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.126494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.131164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.131414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.131440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.135842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.136097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.136117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.140495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.140748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.140769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.145248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.145504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.145525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.149843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.150096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.150116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.154479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.154738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.154759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.159069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.159321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.159343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.163639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.163888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.163909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.506 [2024-11-07 10:54:35.168299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.506 [2024-11-07 10:54:35.168571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.506 [2024-11-07 10:54:35.168597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.172995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.173248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.173269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.177634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.177911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.177933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.768 5522.00 IOPS, 690.25 MiB/s [2024-11-07T09:54:35.439Z] [2024-11-07 10:54:35.183163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.183428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.183455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.187818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.188072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.188094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.192448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.192713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.192734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.197105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.197370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.197389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.201665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.201919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.201940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.206251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.206509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.206531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.210947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.211223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.211244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.215715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.215972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.215994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.220364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.220623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.220645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.224976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.225229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.225249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.229540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.229793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.229814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.234099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.234351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.234372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.238699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.768 [2024-11-07 10:54:35.238950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.768 [2024-11-07 10:54:35.238971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.768 [2024-11-07 10:54:35.243236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.243495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.243517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.247777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.248029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.248050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.252325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.252581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.252601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.257274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.257532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.257552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.262760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.263025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.263046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.268026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.268101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.268119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.273713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.273950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.273971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.279149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.279383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.279404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.284326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.284567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.284587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.289921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 
[2024-11-07 10:54:35.290156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.290177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.294829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.295062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.295087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.299601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.299834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.299855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.304709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.304946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.304966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.309782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.310016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.310036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.315305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.315547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.315568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.320601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.320838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.320859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.325419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.325663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.325683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.330147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.330383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.330403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.334581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.334819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.334839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.339111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.339353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.339373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.343721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.343959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.769 [2024-11-07 10:54:35.343979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.769 [2024-11-07 10:54:35.348359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.769 [2024-11-07 10:54:35.348722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.348744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.353075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.353312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.353332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.357708] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.357943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.357963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.362114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.362351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.362371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.366514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.366752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.366772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.370846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.371084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.371104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.375212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.375452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.375472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.379576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.379812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.379832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.383954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.384189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.384210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:26:07.770 [2024-11-07 10:54:35.388577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.388815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.388836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.394131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.394429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.394455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.400592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.400868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.400890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.406825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.407133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.407155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.413324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.413625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.413646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.419778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.420040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.420060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.426190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.426461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.426489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.770 [2024-11-07 10:54:35.432473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:07.770 [2024-11-07 10:54:35.432736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.770 [2024-11-07 10:54:35.432760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.031 [2024-11-07 10:54:35.438707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.031 [2024-11-07 10:54:35.439005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.031 [2024-11-07 10:54:35.439028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.031 [2024-11-07 10:54:35.444752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.031 [2024-11-07 10:54:35.445028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.445051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.451103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.451356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.451379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.457636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.457939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.457961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.463945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.464214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.464237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.470372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.470633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.470655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.476679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.476976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.476998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.482815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.483119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.483141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.488431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.488666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.488688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.493583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.493811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.493832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.499131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.499357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.499379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.504477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.504705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.504726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.509100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.509328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.509350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.513631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.513859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.513880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.518089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.518316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.518337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.522483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.522714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.522739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.527097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.527320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.527340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.532151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.532372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.532393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.536487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.536709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.536730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.540837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.541063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 
[2024-11-07 10:54:35.541085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.545148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.545371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.545392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.549474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.032 [2024-11-07 10:54:35.549706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.032 [2024-11-07 10:54:35.549726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.032 [2024-11-07 10:54:35.553775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.554000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.554021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.558080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.558302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.558322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.562405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.562644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.562663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.566723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.566946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.566967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.571085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.571310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.571331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.575876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.576098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.576119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.580201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.580422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.580449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.584912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.585134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.585154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.590706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.591000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.591021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.596749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.596973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.596993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.601966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.602197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.602218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.606590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.606828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.606848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.611920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.612212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.612233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.617942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.618222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.618243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.623301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.623530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.623551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.628369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.628596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.628616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.632752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.632974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.632995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.637100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.637323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.637343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.641471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.641705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.641726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.645917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.646140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.033 [2024-11-07 10:54:35.646164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.033 [2024-11-07 10:54:35.650371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.033 [2024-11-07 10:54:35.650601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.034 [2024-11-07 10:54:35.650621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.034 [2024-11-07 10:54:35.654724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.034 [2024-11-07 10:54:35.654944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.034 [2024-11-07 10:54:35.654964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.034 [2024-11-07 10:54:35.659028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.034 [2024-11-07 10:54:35.659252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.034 [2024-11-07 10:54:35.659272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.034 [2024-11-07 10:54:35.663306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.034 [2024-11-07 10:54:35.663537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.034 [2024-11-07 10:54:35.663557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.034 [2024-11-07 10:54:35.667589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.034 [2024-11-07 10:54:35.667813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.034 [2024-11-07 10:54:35.667834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.034 [2024-11-07 10:54:35.671886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.034 
[2024-11-07 10:54:35.672109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.034 [2024-11-07 10:54:35.672129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.034 [2024-11-07 10:54:35.676159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.034 [2024-11-07 10:54:35.676383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.034 [2024-11-07 10:54:35.676404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.034 [2024-11-07 10:54:35.680443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.034 [2024-11-07 10:54:35.680677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.034 [2024-11-07 10:54:35.680697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.034 [2024-11-07 10:54:35.684777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.034 [2024-11-07 10:54:35.685008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.034 [2024-11-07 10:54:35.685028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.034 [2024-11-07 10:54:35.689090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.034 [2024-11-07 10:54:35.689311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.034 [2024-11-07 10:54:35.689333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.034 [2024-11-07 10:54:35.693373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.034 [2024-11-07 10:54:35.693615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.034 [2024-11-07 10:54:35.693639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.697805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.698036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.698059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.702133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.702355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.702377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.706473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.706698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.706720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.710837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.711062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.711083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.715221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.715457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.715479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.719727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.719968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.720005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.724286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.724515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.724535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.729217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.729444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.729464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.733854] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.734078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.734099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.738446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.738670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.738690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.743256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.743485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.743506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.748331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.748571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.748593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.753474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.753699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.753719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.758245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.758474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.758494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.762964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.763188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.763213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:26:08.294 [2024-11-07 10:54:35.767720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.767946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.767966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.772339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.772568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.772588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.777034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.294 [2024-11-07 10:54:35.777257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.294 [2024-11-07 10:54:35.777278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.294 [2024-11-07 10:54:35.781666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.781890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.781910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.786157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.786382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.786403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.790883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.791107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.791127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.796228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.796457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.796477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.801752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.801975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.801995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.806753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.806978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.806999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.811482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.811704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.811725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.816187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.816410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.816431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.820771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.820998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.821019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.825077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.825302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.825322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.829375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.829604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.829625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.833705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.833928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.833949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.838036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.838260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.838280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.842580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.842804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.842828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.847451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.847677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.847696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.851766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.851988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.852009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.856066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.856291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.856311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.295 [2024-11-07 10:54:35.860337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90 00:26:08.295 [2024-11-07 10:54:35.860564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.295 [2024-11-07 10:54:35.860584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:08.295 [2024-11-07 10:54:35.864653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90
00:26:08.295 [2024-11-07 10:54:35.864876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.295 [2024-11-07 10:54:35.864896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-message sequence (tcp.c:2233 data_crc32_calc_done data digest error, nvme_qpair.c:243 WRITE command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR completion) repeats for each subsequent WRITE on qid:1 from 10:54:35.868941 through 10:54:36.164928, differing only in timestamp, lba and sqhd ...]
00:26:08.558 [2024-11-07 10:54:36.169412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90
00:26:08.558 [2024-11-07 10:54:36.169644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.558 [2024-11-07 10:54:36.169664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:08.558 [2024-11-07 10:54:36.174042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90
00:26:08.558 [2024-11-07 10:54:36.174268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.558 [2024-11-07 10:54:36.174289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:08.558 [2024-11-07 10:54:36.178561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90
00:26:08.558 [2024-11-07 10:54:36.178788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.558 [2024-11-07 10:54:36.178810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:08.558 5929.00 IOPS, 741.12 MiB/s [2024-11-07T09:54:36.229Z] [2024-11-07 10:54:36.184572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x149e9e0) with pdu=0x2000166fef90
00:26:08.558 [2024-11-07 10:54:36.184628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:08.558 [2024-11-07 10:54:36.184651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:08.558
00:26:08.558 Latency(us)
00:26:08.558 [2024-11-07T09:54:36.229Z] Device Information : runtime(s)    IOPS    MiB/s   Fail/s   TO/s   Average    min      max
00:26:08.558 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:08.558 nvme0n1 : 2.00             5925.83  740.73    0.00   0.00   2695.44  1837.86  13506.11
00:26:08.558 [2024-11-07T09:54:36.229Z] ===================================================================================================================
00:26:08.558 [2024-11-07T09:54:36.229Z] Total :          5925.83  740.73    0.00   0.00   2695.44  1837.86  13506.11
00:26:08.558 {
00:26:08.558   "results": [
00:26:08.558     {
00:26:08.558       "job": "nvme0n1",
00:26:08.558       "core_mask": "0x2",
00:26:08.558       "workload": "randwrite",
00:26:08.558       "status": "finished",
00:26:08.558       "queue_depth": 16,
00:26:08.558       "io_size": 131072,
00:26:08.558       "runtime": 2.003938,
00:26:08.558       "iops": 5925.832036719699,
00:26:08.558       "mibps": 740.7290045899624,
00:26:08.558       "io_failed": 0,
00:26:08.558       "io_timeout": 0,
00:26:08.558       "avg_latency_us": 2695.443191212815,
00:26:08.558       "min_latency_us": 1837.8573913043479,
00:26:08.558       "max_latency_us": 13506.114782608696
00:26:08.558     }
00:26:08.558   ],
00:26:08.558   "core_count": 1
00:26:08.558 }
00:26:08.558
10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:08.558 | .driver_specific
00:26:08.558 | .nvme_error
00:26:08.558 | .status_code
00:26:08.558 | .command_transient_transport_error'
10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:08.816 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 383 > 0 )) 00:26:08.816 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2831497 00:26:08.816 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2831497 ']' 00:26:08.816 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2831497 00:26:08.816 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:08.816 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:08.816 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2831497 00:26:08.816 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:08.816 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:08.816 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2831497' 00:26:08.816 killing process with pid 2831497 00:26:08.816 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2831497 00:26:08.816 Received shutdown signal, test time was about 2.000000 seconds 00:26:08.816 00:26:08.816 Latency(us) 00:26:08.816 [2024-11-07T09:54:36.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.816 [2024-11-07T09:54:36.487Z] =================================================================================================================== 00:26:08.816 [2024-11-07T09:54:36.487Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:08.816 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2831497 00:26:09.075 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2829834 00:26:09.075 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2829834 ']' 00:26:09.075 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2829834 00:26:09.075 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:09.075 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:09.075 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2829834 00:26:09.075 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:09.075 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:09.075 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2829834' 00:26:09.075 killing process with pid 2829834 00:26:09.075 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2829834 00:26:09.075 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2829834 
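The trace just above is digest.sh's get_transient_errcount step: it queries bdevperf's iostat over the bperf RPC socket, pulls the command_transient_transport_error counter out with jq, and requires the count (383 in this run) to be non-zero before tearing the processes down. A minimal standalone sketch of that same check, assuming bdevperf is still running with its RPC server on /var/tmp/bperf.sock (socket path, RPC name and bdev name taken from this log; the wrapper script itself is illustrative):

  #!/usr/bin/env bash
  # Minimal re-creation of the get_transient_errcount check traced above.
  # Assumes bdevperf is running with its RPC server listening on /var/tmp/bperf.sock.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  get_transient_errcount() {
      local bdev=$1
      "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)
  # The digest-error test only passes if at least one WRITE completed with a
  # transient transport error after the injected data-digest corruption.
  (( errcount > 0 )) || { echo "no transient transport errors recorded" >&2; exit 1; }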
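As a quick cross-check of the summary bdevperf printed above (a back-of-the-envelope rework of the numbers it reported, not additional output from the run): each I/O is io_size 131072 bytes = 128 KiB = 1/8 MiB, so 5925.83 IOPS works out to 5925.83 / 8 ≈ 740.73 MiB/s, matching the reported mibps; and with queue_depth 16 outstanding, Little's law predicts an average latency of about 16 / 5925.83 s ≈ 2.7 ms, consistent with the reported 2695.44 us average over the 2.003938 s runtime.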
00:26:09.336 00:26:09.336 real 0m13.801s 00:26:09.336 user 0m26.439s 00:26:09.336 sys 0m4.521s 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:09.336 ************************************ 00:26:09.336 END TEST nvmf_digest_error 00:26:09.336 ************************************ 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:09.336 rmmod nvme_tcp 00:26:09.336 rmmod nvme_fabrics 00:26:09.336 rmmod nvme_keyring 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2829834 ']' 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2829834 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 2829834 ']' 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 2829834 00:26:09.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2829834) - No such process 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 2829834 is not found' 00:26:09.336 Process with pid 2829834 is not found 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.336 10:54:36 nvmf_tcp.nvmf_host.nvmf_digest 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.873 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:11.873 00:26:11.873 real 0m35.604s 00:26:11.873 user 0m54.704s 00:26:11.873 sys 0m13.241s 00:26:11.873 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:11.873 10:54:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:11.873 ************************************ 00:26:11.873 END TEST nvmf_digest 00:26:11.873 ************************************ 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.873 ************************************ 00:26:11.873 START TEST nvmf_bdevperf 00:26:11.873 ************************************ 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:11.873 * Looking for test storage... 00:26:11.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:11.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.873 --rc genhtml_branch_coverage=1 00:26:11.873 --rc genhtml_function_coverage=1 00:26:11.873 --rc genhtml_legend=1 00:26:11.873 --rc geninfo_all_blocks=1 00:26:11.873 --rc geninfo_unexecuted_blocks=1 00:26:11.873 00:26:11.873 ' 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:11.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.873 --rc genhtml_branch_coverage=1 00:26:11.873 --rc genhtml_function_coverage=1 00:26:11.873 --rc genhtml_legend=1 00:26:11.873 --rc geninfo_all_blocks=1 00:26:11.873 --rc geninfo_unexecuted_blocks=1 00:26:11.873 00:26:11.873 ' 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:11.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.873 --rc genhtml_branch_coverage=1 00:26:11.873 --rc genhtml_function_coverage=1 00:26:11.873 --rc genhtml_legend=1 00:26:11.873 --rc geninfo_all_blocks=1 00:26:11.873 --rc geninfo_unexecuted_blocks=1 00:26:11.873 00:26:11.873 ' 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:11.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:11.873 --rc genhtml_branch_coverage=1 00:26:11.873 --rc genhtml_function_coverage=1 00:26:11.873 --rc genhtml_legend=1 00:26:11.873 --rc geninfo_all_blocks=1 00:26:11.873 --rc geninfo_unexecuted_blocks=1 00:26:11.873 00:26:11.873 ' 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:11.873 10:54:39 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.873 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:11.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:11.874 10:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.146 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:17.147 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:17.147 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:17.147 Found net devices under 0000:86:00.0: cvl_0_0 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:17.147 Found net devices under 0000:86:00.1: cvl_0_1 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:17.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:26:17.147 00:26:17.147 --- 10.0.0.2 ping statistics --- 00:26:17.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.147 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:17.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:26:17.147 00:26:17.147 --- 10.0.0.1 ping statistics --- 00:26:17.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.147 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2835500 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2835500 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2835500 ']' 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:17.147 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.147 [2024-11-07 10:54:44.756338] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
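nvmf_tcp_init above builds the test topology by pushing the first E810 port into a private network namespace for the target while the second port stays in the root namespace as the initiator side. Condensed from the commands traced in this run (the cvl_0_0/cvl_0_1 names, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addressing are specific to this job), the setup amounts to roughly:

# Start from clean addressing on both ports.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Target port goes into its own namespace; initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends of the link.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring the links (and loopback inside the namespace) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Accept NVMe/TCP traffic on the initiator side and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1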
00:26:17.147 [2024-11-07 10:54:44.756394] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.406 [2024-11-07 10:54:44.824046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:17.406 [2024-11-07 10:54:44.867390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.406 [2024-11-07 10:54:44.867426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.406 [2024-11-07 10:54:44.867437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.406 [2024-11-07 10:54:44.867444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.406 [2024-11-07 10:54:44.867449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:17.406 [2024-11-07 10:54:44.868855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.406 [2024-11-07 10:54:44.868922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.406 [2024-11-07 10:54:44.868924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.406 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:17.406 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:26:17.406 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:17.406 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:17.406 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.406 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.406 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:17.406 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.406 10:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.406 [2024-11-07 10:54:45.005110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.406 Malloc0 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
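The target itself is the stock nvmf_tgt application started inside that namespace, as traced above: instance 0, tracepoint group mask 0xFFFF, core mask 0xE (three reactors, matching the "Total cores available: 3" notice). Outside the harness the equivalent is roughly the following; the polling loop is only a stand-in for the harness's waitforlisten helper:

# Launch the NVMe-oF target inside the target-side namespace on cores 1-3 (-m 0xE).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Stand-in for waitforlisten: poll the default RPC socket (/var/tmp/spdk.sock) until it answers.
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done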
00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.406 [2024-11-07 10:54:45.068646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.406 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.664 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:17.664 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:17.664 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:17.664 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:17.664 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:17.664 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:17.664 { 00:26:17.664 "params": { 00:26:17.664 "name": "Nvme$subsystem", 00:26:17.664 "trtype": "$TEST_TRANSPORT", 00:26:17.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.664 "adrfam": "ipv4", 00:26:17.664 "trsvcid": "$NVMF_PORT", 00:26:17.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.664 "hdgst": ${hdgst:-false}, 00:26:17.664 "ddgst": ${ddgst:-false} 00:26:17.664 }, 00:26:17.664 "method": "bdev_nvme_attach_controller" 00:26:17.664 } 00:26:17.664 EOF 00:26:17.664 )") 00:26:17.664 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:17.664 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:17.664 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:17.664 10:54:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:17.664 "params": { 00:26:17.664 "name": "Nvme1", 00:26:17.664 "trtype": "tcp", 00:26:17.664 "traddr": "10.0.0.2", 00:26:17.664 "adrfam": "ipv4", 00:26:17.664 "trsvcid": "4420", 00:26:17.665 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:17.665 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:17.665 "hdgst": false, 00:26:17.665 "ddgst": false 00:26:17.665 }, 00:26:17.665 "method": "bdev_nvme_attach_controller" 00:26:17.665 }' 00:26:17.665 [2024-11-07 10:54:45.120019] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
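With the target listening on its RPC socket, tgt_init provisions the export over JSON-RPC. Modulo the harness's rpc_cmd wrapper, the calls traced above map onto plain scripts/rpc.py invocations against the default /var/tmp/spdk.sock (run from the SPDK checkout; paths are assumptions of this sketch):

RPC="./scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options exactly as traced above
$RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose Malloc0 as a namespace of cnode1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420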
00:26:17.665 [2024-11-07 10:54:45.120059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835529 ] 00:26:17.665 [2024-11-07 10:54:45.184193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.665 [2024-11-07 10:54:45.225278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.922 Running I/O for 1 seconds... 00:26:18.857 10974.00 IOPS, 42.87 MiB/s 00:26:18.857 Latency(us) 00:26:18.857 [2024-11-07T09:54:46.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.857 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:18.857 Verification LBA range: start 0x0 length 0x4000 00:26:18.857 Nvme1n1 : 1.00 11054.25 43.18 0.00 0.00 11536.04 1595.66 12195.39 00:26:18.857 [2024-11-07T09:54:46.528Z] =================================================================================================================== 00:26:18.857 [2024-11-07T09:54:46.528Z] Total : 11054.25 43.18 0.00 0.00 11536.04 1595.66 12195.39 00:26:19.115 10:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2835760 00:26:19.115 10:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:19.115 10:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:19.115 10:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:19.115 10:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:19.115 10:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:19.115 10:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:19.115 10:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:19.115 { 00:26:19.115 "params": { 00:26:19.115 "name": "Nvme$subsystem", 00:26:19.115 "trtype": "$TEST_TRANSPORT", 00:26:19.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:19.115 "adrfam": "ipv4", 00:26:19.115 "trsvcid": "$NVMF_PORT", 00:26:19.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:19.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:19.115 "hdgst": ${hdgst:-false}, 00:26:19.115 "ddgst": ${ddgst:-false} 00:26:19.115 }, 00:26:19.115 "method": "bdev_nvme_attach_controller" 00:26:19.115 } 00:26:19.115 EOF 00:26:19.115 )") 00:26:19.115 10:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:19.115 10:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
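The bdevperf passes above are driven entirely by that generated JSON: it attaches controller Nvme1 over TCP to 10.0.0.2:4420, and the command line sets a 128-deep, 4 KiB verify workload. Reproduced with a file instead of the /dev/fd pipe, the first one-second pass looks roughly like the sketch below; the trace only echoes the attach-controller entry, so the surrounding "subsystems"/"bdev" envelope is assumed here as the usual SPDK JSON-config wrapper:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 128 outstanding I/Os, 4 KiB I/O size, verify workload, 1 second run (as in the first pass above).
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 1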
00:26:19.115 10:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:19.115 10:54:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:19.115 "params": { 00:26:19.115 "name": "Nvme1", 00:26:19.115 "trtype": "tcp", 00:26:19.115 "traddr": "10.0.0.2", 00:26:19.115 "adrfam": "ipv4", 00:26:19.115 "trsvcid": "4420", 00:26:19.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:19.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:19.115 "hdgst": false, 00:26:19.115 "ddgst": false 00:26:19.115 }, 00:26:19.115 "method": "bdev_nvme_attach_controller" 00:26:19.115 }' 00:26:19.115 [2024-11-07 10:54:46.640427] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:26:19.115 [2024-11-07 10:54:46.640495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835760 ] 00:26:19.115 [2024-11-07 10:54:46.704471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.115 [2024-11-07 10:54:46.743593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.373 Running I/O for 15 seconds... 00:26:21.360 10802.00 IOPS, 42.20 MiB/s [2024-11-07T09:54:49.968Z] 10809.50 IOPS, 42.22 MiB/s [2024-11-07T09:54:49.968Z] 10:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2835500 00:26:22.297 10:54:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:22.297 [2024-11-07 10:54:49.608852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.297 [2024-11-07 10:54:49.608895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.608912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.608921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.608932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.608947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.608956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.608964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.608972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.608979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.608989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 
10:54:49.608997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.297 [2024-11-07 10:54:49.609246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.297 [2024-11-07 10:54:49.609255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609784] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609938] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.298 [2024-11-07 10:54:49.609984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.298 [2024-11-07 10:54:49.609991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.609999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95720 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 
[2024-11-07 10:54:49.610256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.299 [2024-11-07 10:54:49.610413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.299 [2024-11-07 10:54:49.610429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:95904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.299 [2024-11-07 10:54:49.610616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.299 [2024-11-07 10:54:49.610623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610901] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.300 [2024-11-07 10:54:49.610955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.300 [2024-11-07 10:54:49.610972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.300 [2024-11-07 10:54:49.610988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.610996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.300 [2024-11-07 10:54:49.611003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.611012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.300 [2024-11-07 10:54:49.611019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.611028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.300 [2024-11-07 10:54:49.611035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.611043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.300 [2024-11-07 10:54:49.611050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.611058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1ecdd80 is same with the state(6) to be set 00:26:22.300 [2024-11-07 10:54:49.611069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:22.300 [2024-11-07 10:54:49.611074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:22.300 [2024-11-07 10:54:49.611082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95192 len:8 PRP1 0x0 PRP2 0x0 00:26:22.300 [2024-11-07 10:54:49.611090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.300 [2024-11-07 10:54:49.613958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.300 [2024-11-07 10:54:49.614016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.300 [2024-11-07 10:54:49.614628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.300 [2024-11-07 10:54:49.614680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.300 [2024-11-07 10:54:49.614706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.300 [2024-11-07 10:54:49.615290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.300 [2024-11-07 10:54:49.615566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.300 [2024-11-07 10:54:49.615575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.300 [2024-11-07 10:54:49.615584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.300 [2024-11-07 10:54:49.615592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.300 [2024-11-07 10:54:49.627186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.300 [2024-11-07 10:54:49.627639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.300 [2024-11-07 10:54:49.627689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.300 [2024-11-07 10:54:49.627712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.300 [2024-11-07 10:54:49.628294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.300 [2024-11-07 10:54:49.628611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.300 [2024-11-07 10:54:49.628621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.300 [2024-11-07 10:54:49.628628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:26:22.300 [2024-11-07 10:54:49.628635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.300 [2024-11-07 10:54:49.640154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.300 [2024-11-07 10:54:49.640519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.300 [2024-11-07 10:54:49.640536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.300 [2024-11-07 10:54:49.640545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.301 [2024-11-07 10:54:49.640724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.301 [2024-11-07 10:54:49.640889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.301 [2024-11-07 10:54:49.640899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.301 [2024-11-07 10:54:49.640906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.301 [2024-11-07 10:54:49.640912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.301 [2024-11-07 10:54:49.653032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.301 [2024-11-07 10:54:49.653471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.301 [2024-11-07 10:54:49.653518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.301 [2024-11-07 10:54:49.653543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.301 [2024-11-07 10:54:49.654125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.301 [2024-11-07 10:54:49.654725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.301 [2024-11-07 10:54:49.654763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.301 [2024-11-07 10:54:49.654770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.301 [2024-11-07 10:54:49.654777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.301 [2024-11-07 10:54:49.665844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.301 [2024-11-07 10:54:49.666203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.301 [2024-11-07 10:54:49.666221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.301 [2024-11-07 10:54:49.666228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.301 [2024-11-07 10:54:49.666392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.301 [2024-11-07 10:54:49.666565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.301 [2024-11-07 10:54:49.666575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.301 [2024-11-07 10:54:49.666581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.301 [2024-11-07 10:54:49.666587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.301 [2024-11-07 10:54:49.678688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.301 [2024-11-07 10:54:49.679111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.301 [2024-11-07 10:54:49.679149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.301 [2024-11-07 10:54:49.679176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.301 [2024-11-07 10:54:49.679709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.301 [2024-11-07 10:54:49.679874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.301 [2024-11-07 10:54:49.679882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.301 [2024-11-07 10:54:49.679889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.301 [2024-11-07 10:54:49.679895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.301 [2024-11-07 10:54:49.691537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.301 [2024-11-07 10:54:49.691956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.301 [2024-11-07 10:54:49.691973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.301 [2024-11-07 10:54:49.691983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.301 [2024-11-07 10:54:49.692147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.301 [2024-11-07 10:54:49.692311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.301 [2024-11-07 10:54:49.692320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.301 [2024-11-07 10:54:49.692327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.301 [2024-11-07 10:54:49.692333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.301 [2024-11-07 10:54:49.704376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.301 [2024-11-07 10:54:49.704829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.301 [2024-11-07 10:54:49.704847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.301 [2024-11-07 10:54:49.704854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.301 [2024-11-07 10:54:49.705019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.301 [2024-11-07 10:54:49.705185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.301 [2024-11-07 10:54:49.705194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.301 [2024-11-07 10:54:49.705200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.301 [2024-11-07 10:54:49.705207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.301 [2024-11-07 10:54:49.717328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.301 [2024-11-07 10:54:49.717608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.301 [2024-11-07 10:54:49.717626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.301 [2024-11-07 10:54:49.717633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.301 [2024-11-07 10:54:49.717798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.301 [2024-11-07 10:54:49.717963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.301 [2024-11-07 10:54:49.717972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.301 [2024-11-07 10:54:49.717979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.301 [2024-11-07 10:54:49.717985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.301 [2024-11-07 10:54:49.730197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.301 [2024-11-07 10:54:49.730616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.301 [2024-11-07 10:54:49.730634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.301 [2024-11-07 10:54:49.730642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.301 [2024-11-07 10:54:49.730807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.301 [2024-11-07 10:54:49.730975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.301 [2024-11-07 10:54:49.730985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.301 [2024-11-07 10:54:49.730991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.301 [2024-11-07 10:54:49.730998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.301 [2024-11-07 10:54:49.743110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.301 [2024-11-07 10:54:49.743611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.301 [2024-11-07 10:54:49.743658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.301 [2024-11-07 10:54:49.743683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.301 [2024-11-07 10:54:49.744028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.301 [2024-11-07 10:54:49.744193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.301 [2024-11-07 10:54:49.744203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.301 [2024-11-07 10:54:49.744209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.302 [2024-11-07 10:54:49.744216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.302 [2024-11-07 10:54:49.756041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.302 [2024-11-07 10:54:49.756448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.302 [2024-11-07 10:54:49.756467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.302 [2024-11-07 10:54:49.756476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.302 [2024-11-07 10:54:49.756639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.302 [2024-11-07 10:54:49.756805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.302 [2024-11-07 10:54:49.756815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.302 [2024-11-07 10:54:49.756821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.302 [2024-11-07 10:54:49.756828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.302 [2024-11-07 10:54:49.769227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.302 [2024-11-07 10:54:49.769622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.302 [2024-11-07 10:54:49.769640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.302 [2024-11-07 10:54:49.769649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.302 [2024-11-07 10:54:49.769827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.302 [2024-11-07 10:54:49.770006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.302 [2024-11-07 10:54:49.770016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.302 [2024-11-07 10:54:49.770028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.302 [2024-11-07 10:54:49.770035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.302 [2024-11-07 10:54:49.782414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.302 [2024-11-07 10:54:49.782771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.302 [2024-11-07 10:54:49.782790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.302 [2024-11-07 10:54:49.782798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.302 [2024-11-07 10:54:49.782976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.302 [2024-11-07 10:54:49.783155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.302 [2024-11-07 10:54:49.783165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.302 [2024-11-07 10:54:49.783173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.302 [2024-11-07 10:54:49.783181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.302 [2024-11-07 10:54:49.795631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.302 [2024-11-07 10:54:49.795941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.302 [2024-11-07 10:54:49.795959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.302 [2024-11-07 10:54:49.795968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.302 [2024-11-07 10:54:49.796151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.302 [2024-11-07 10:54:49.796336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.302 [2024-11-07 10:54:49.796346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.302 [2024-11-07 10:54:49.796353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.302 [2024-11-07 10:54:49.796360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.302 [2024-11-07 10:54:49.808750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.302 [2024-11-07 10:54:49.809123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.302 [2024-11-07 10:54:49.809141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.302 [2024-11-07 10:54:49.809149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.302 [2024-11-07 10:54:49.809328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.302 [2024-11-07 10:54:49.809513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.302 [2024-11-07 10:54:49.809523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.302 [2024-11-07 10:54:49.809530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.302 [2024-11-07 10:54:49.809537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.302 [2024-11-07 10:54:49.821939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.302 [2024-11-07 10:54:49.822369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.302 [2024-11-07 10:54:49.822417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.302 [2024-11-07 10:54:49.822454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.302 [2024-11-07 10:54:49.822998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.302 [2024-11-07 10:54:49.823164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.302 [2024-11-07 10:54:49.823173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.302 [2024-11-07 10:54:49.823180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.302 [2024-11-07 10:54:49.823187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.302 [2024-11-07 10:54:49.834865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.302 [2024-11-07 10:54:49.835152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.302 [2024-11-07 10:54:49.835170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.302 [2024-11-07 10:54:49.835179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.302 [2024-11-07 10:54:49.835343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.302 [2024-11-07 10:54:49.835517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.302 [2024-11-07 10:54:49.835527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.302 [2024-11-07 10:54:49.835533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.302 [2024-11-07 10:54:49.835540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.302 [2024-11-07 10:54:49.847917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.302 [2024-11-07 10:54:49.848209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.302 [2024-11-07 10:54:49.848228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.302 [2024-11-07 10:54:49.848236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.302 [2024-11-07 10:54:49.848414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.302 [2024-11-07 10:54:49.848601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.302 [2024-11-07 10:54:49.848611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.302 [2024-11-07 10:54:49.848618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.302 [2024-11-07 10:54:49.848625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.302 [2024-11-07 10:54:49.861015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.302 [2024-11-07 10:54:49.861377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.302 [2024-11-07 10:54:49.861395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.302 [2024-11-07 10:54:49.861407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.302 [2024-11-07 10:54:49.861594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.302 [2024-11-07 10:54:49.861774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.302 [2024-11-07 10:54:49.861784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.302 [2024-11-07 10:54:49.861791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.302 [2024-11-07 10:54:49.861798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.302 [2024-11-07 10:54:49.874360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.302 [2024-11-07 10:54:49.874731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.302 [2024-11-07 10:54:49.874753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.302 [2024-11-07 10:54:49.874763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.303 [2024-11-07 10:54:49.874959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.303 [2024-11-07 10:54:49.875156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.303 [2024-11-07 10:54:49.875167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.303 [2024-11-07 10:54:49.875176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.303 [2024-11-07 10:54:49.875184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.303 [2024-11-07 10:54:49.887743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.303 [2024-11-07 10:54:49.888208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.303 [2024-11-07 10:54:49.888228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.303 [2024-11-07 10:54:49.888237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.303 [2024-11-07 10:54:49.888443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.303 [2024-11-07 10:54:49.888640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.303 [2024-11-07 10:54:49.888650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.303 [2024-11-07 10:54:49.888658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.303 [2024-11-07 10:54:49.888666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.303 [2024-11-07 10:54:49.900965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.303 [2024-11-07 10:54:49.901407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.303 [2024-11-07 10:54:49.901425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.303 [2024-11-07 10:54:49.901440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.303 [2024-11-07 10:54:49.901624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.303 [2024-11-07 10:54:49.901813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.303 [2024-11-07 10:54:49.901823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.303 [2024-11-07 10:54:49.901830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.303 [2024-11-07 10:54:49.901837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.303 [2024-11-07 10:54:49.914331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.303 [2024-11-07 10:54:49.914742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.303 [2024-11-07 10:54:49.914760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.303 [2024-11-07 10:54:49.914769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.303 [2024-11-07 10:54:49.914953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.303 [2024-11-07 10:54:49.915138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.303 [2024-11-07 10:54:49.915148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.303 [2024-11-07 10:54:49.915156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.303 [2024-11-07 10:54:49.915163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.303 [2024-11-07 10:54:49.927531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.303 [2024-11-07 10:54:49.927894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.303 [2024-11-07 10:54:49.927912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.303 [2024-11-07 10:54:49.927920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.303 [2024-11-07 10:54:49.928099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.303 [2024-11-07 10:54:49.928278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.303 [2024-11-07 10:54:49.928288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.303 [2024-11-07 10:54:49.928295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.303 [2024-11-07 10:54:49.928301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.303 [2024-11-07 10:54:49.940719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.303 [2024-11-07 10:54:49.941182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.303 [2024-11-07 10:54:49.941229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.303 [2024-11-07 10:54:49.941254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.303 [2024-11-07 10:54:49.941847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.303 [2024-11-07 10:54:49.942419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.303 [2024-11-07 10:54:49.942429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.303 [2024-11-07 10:54:49.942446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.303 [2024-11-07 10:54:49.942454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.303 [2024-11-07 10:54:49.953863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.303 [2024-11-07 10:54:49.954294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.303 [2024-11-07 10:54:49.954341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.303 [2024-11-07 10:54:49.954366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.303 [2024-11-07 10:54:49.954958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.303 [2024-11-07 10:54:49.955466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.303 [2024-11-07 10:54:49.955477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.303 [2024-11-07 10:54:49.955484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.303 [2024-11-07 10:54:49.955491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.564 [2024-11-07 10:54:49.967044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-11-07 10:54:49.967425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-11-07 10:54:49.967487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.564 [2024-11-07 10:54:49.967513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.564 [2024-11-07 10:54:49.968094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.564 [2024-11-07 10:54:49.968551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.564 [2024-11-07 10:54:49.968563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.564 [2024-11-07 10:54:49.968570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.564 [2024-11-07 10:54:49.968577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.564 [2024-11-07 10:54:49.980212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-11-07 10:54:49.980656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-11-07 10:54:49.980674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.564 [2024-11-07 10:54:49.980683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.564 [2024-11-07 10:54:49.980862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.564 [2024-11-07 10:54:49.981042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.564 [2024-11-07 10:54:49.981052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.564 [2024-11-07 10:54:49.981060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.564 [2024-11-07 10:54:49.981068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.564 9424.67 IOPS, 36.82 MiB/s [2024-11-07T09:54:50.235Z] [2024-11-07 10:54:49.994660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-11-07 10:54:49.994970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-11-07 10:54:49.994988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.564 [2024-11-07 10:54:49.994997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.564 [2024-11-07 10:54:49.995182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.564 [2024-11-07 10:54:49.995367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.564 [2024-11-07 10:54:49.995377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.564 [2024-11-07 10:54:49.995386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.564 [2024-11-07 10:54:49.995393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.564 [2024-11-07 10:54:50.007899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-11-07 10:54:50.008370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-11-07 10:54:50.008390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.564 [2024-11-07 10:54:50.008399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.564 [2024-11-07 10:54:50.008607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.564 [2024-11-07 10:54:50.008807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.564 [2024-11-07 10:54:50.008818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.564 [2024-11-07 10:54:50.008826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.564 [2024-11-07 10:54:50.008834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.564 [2024-11-07 10:54:50.021054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-11-07 10:54:50.021475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-11-07 10:54:50.021494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.564 [2024-11-07 10:54:50.021502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.564 [2024-11-07 10:54:50.021681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.564 [2024-11-07 10:54:50.021861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.564 [2024-11-07 10:54:50.021872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.564 [2024-11-07 10:54:50.021879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.564 [2024-11-07 10:54:50.021887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.564 [2024-11-07 10:54:50.034107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-11-07 10:54:50.034463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-11-07 10:54:50.034487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.564 [2024-11-07 10:54:50.034495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.564 [2024-11-07 10:54:50.034673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.564 [2024-11-07 10:54:50.034851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.564 [2024-11-07 10:54:50.034861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.564 [2024-11-07 10:54:50.034869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.564 [2024-11-07 10:54:50.034876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.564 [2024-11-07 10:54:50.047248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-11-07 10:54:50.047543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-11-07 10:54:50.047561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.564 [2024-11-07 10:54:50.047570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.564 [2024-11-07 10:54:50.047747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.564 [2024-11-07 10:54:50.047927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.564 [2024-11-07 10:54:50.047937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.564 [2024-11-07 10:54:50.047944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.564 [2024-11-07 10:54:50.047951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.564 [2024-11-07 10:54:50.060308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-11-07 10:54:50.060646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-11-07 10:54:50.060664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.564 [2024-11-07 10:54:50.060672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.564 [2024-11-07 10:54:50.060850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.564 [2024-11-07 10:54:50.061030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.564 [2024-11-07 10:54:50.061041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.564 [2024-11-07 10:54:50.061048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.564 [2024-11-07 10:54:50.061056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.564 [2024-11-07 10:54:50.073419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-11-07 10:54:50.073793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-11-07 10:54:50.073811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.564 [2024-11-07 10:54:50.073820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.564 [2024-11-07 10:54:50.074001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.564 [2024-11-07 10:54:50.074181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.564 [2024-11-07 10:54:50.074191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.564 [2024-11-07 10:54:50.074198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.564 [2024-11-07 10:54:50.074205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.564 [2024-11-07 10:54:50.086469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-11-07 10:54:50.086838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-11-07 10:54:50.086856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.564 [2024-11-07 10:54:50.086865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.564 [2024-11-07 10:54:50.087044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.564 [2024-11-07 10:54:50.087223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.564 [2024-11-07 10:54:50.087233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.564 [2024-11-07 10:54:50.087239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.564 [2024-11-07 10:54:50.087246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.564 [2024-11-07 10:54:50.099634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-11-07 10:54:50.100031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-11-07 10:54:50.100050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.564 [2024-11-07 10:54:50.100058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.564 [2024-11-07 10:54:50.100237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.564 [2024-11-07 10:54:50.100416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.564 [2024-11-07 10:54:50.100426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.564 [2024-11-07 10:54:50.100437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.564 [2024-11-07 10:54:50.100444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.564 [2024-11-07 10:54:50.112789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.564 [2024-11-07 10:54:50.113136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.564 [2024-11-07 10:54:50.113155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-11-07 10:54:50.113163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.565 [2024-11-07 10:54:50.113335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.565 [2024-11-07 10:54:50.113534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-11-07 10:54:50.113560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-11-07 10:54:50.113576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-11-07 10:54:50.113585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.565 [2024-11-07 10:54:50.125928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-11-07 10:54:50.126307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-11-07 10:54:50.126353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-11-07 10:54:50.126379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.565 [2024-11-07 10:54:50.126887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.565 [2024-11-07 10:54:50.127062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-11-07 10:54:50.127072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-11-07 10:54:50.127079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-11-07 10:54:50.127085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.565 [2024-11-07 10:54:50.138921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-11-07 10:54:50.139285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-11-07 10:54:50.139342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-11-07 10:54:50.139367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.565 [2024-11-07 10:54:50.139924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.565 [2024-11-07 10:54:50.140131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-11-07 10:54:50.140143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-11-07 10:54:50.140154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-11-07 10:54:50.140164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.565 [2024-11-07 10:54:50.152469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-11-07 10:54:50.152808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-11-07 10:54:50.152826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-11-07 10:54:50.152834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.565 [2024-11-07 10:54:50.153012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.565 [2024-11-07 10:54:50.153203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-11-07 10:54:50.153212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-11-07 10:54:50.153219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-11-07 10:54:50.153227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.565 [2024-11-07 10:54:50.165562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-11-07 10:54:50.166007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-11-07 10:54:50.166051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-11-07 10:54:50.166075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.565 [2024-11-07 10:54:50.166470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.565 [2024-11-07 10:54:50.166645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-11-07 10:54:50.166655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-11-07 10:54:50.166662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-11-07 10:54:50.166669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.565 [2024-11-07 10:54:50.178579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-11-07 10:54:50.179024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-11-07 10:54:50.179070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-11-07 10:54:50.179095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.565 [2024-11-07 10:54:50.179691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.565 [2024-11-07 10:54:50.180193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-11-07 10:54:50.180202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-11-07 10:54:50.180209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-11-07 10:54:50.180217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.565 [2024-11-07 10:54:50.191618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-11-07 10:54:50.192022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-11-07 10:54:50.192040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-11-07 10:54:50.192048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.565 [2024-11-07 10:54:50.192220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.565 [2024-11-07 10:54:50.192395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-11-07 10:54:50.192405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-11-07 10:54:50.192412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-11-07 10:54:50.192418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.565 [2024-11-07 10:54:50.204759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-11-07 10:54:50.205203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-11-07 10:54:50.205256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-11-07 10:54:50.205281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.565 [2024-11-07 10:54:50.205879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.565 [2024-11-07 10:54:50.206220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-11-07 10:54:50.206230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-11-07 10:54:50.206237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-11-07 10:54:50.206244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.565 [2024-11-07 10:54:50.217819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.565 [2024-11-07 10:54:50.218281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.565 [2024-11-07 10:54:50.218326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.565 [2024-11-07 10:54:50.218351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.565 [2024-11-07 10:54:50.218875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.565 [2024-11-07 10:54:50.219058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.565 [2024-11-07 10:54:50.219068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.565 [2024-11-07 10:54:50.219075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.565 [2024-11-07 10:54:50.219082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.825 [2024-11-07 10:54:50.231024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.825 [2024-11-07 10:54:50.231384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.825 [2024-11-07 10:54:50.231402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.825 [2024-11-07 10:54:50.231410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.825 [2024-11-07 10:54:50.231595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.825 [2024-11-07 10:54:50.231775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.825 [2024-11-07 10:54:50.231785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.825 [2024-11-07 10:54:50.231792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.825 [2024-11-07 10:54:50.231799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.825 [2024-11-07 10:54:50.243994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.825 [2024-11-07 10:54:50.244422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.825 [2024-11-07 10:54:50.244448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.825 [2024-11-07 10:54:50.244456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.825 [2024-11-07 10:54:50.244632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.825 [2024-11-07 10:54:50.244808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.825 [2024-11-07 10:54:50.244818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.825 [2024-11-07 10:54:50.244824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.825 [2024-11-07 10:54:50.244832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.825 [2024-11-07 10:54:50.257000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.825 [2024-11-07 10:54:50.257346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.825 [2024-11-07 10:54:50.257364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.825 [2024-11-07 10:54:50.257373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.825 [2024-11-07 10:54:50.257552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.825 [2024-11-07 10:54:50.257726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.825 [2024-11-07 10:54:50.257736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.825 [2024-11-07 10:54:50.257743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.825 [2024-11-07 10:54:50.257750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.825 [2024-11-07 10:54:50.270037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.825 [2024-11-07 10:54:50.270443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.825 [2024-11-07 10:54:50.270461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.825 [2024-11-07 10:54:50.270469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.825 [2024-11-07 10:54:50.270642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.825 [2024-11-07 10:54:50.270817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.825 [2024-11-07 10:54:50.270827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.825 [2024-11-07 10:54:50.270833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.825 [2024-11-07 10:54:50.270840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.825 [2024-11-07 10:54:50.283015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.825 [2024-11-07 10:54:50.283444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.825 [2024-11-07 10:54:50.283461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.825 [2024-11-07 10:54:50.283469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.825 [2024-11-07 10:54:50.283650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.825 [2024-11-07 10:54:50.283826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.825 [2024-11-07 10:54:50.283835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.825 [2024-11-07 10:54:50.283847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.825 [2024-11-07 10:54:50.283855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.825 [2024-11-07 10:54:50.296093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.825 [2024-11-07 10:54:50.296440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.825 [2024-11-07 10:54:50.296458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.825 [2024-11-07 10:54:50.296466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.825 [2024-11-07 10:54:50.296639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.825 [2024-11-07 10:54:50.296812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.825 [2024-11-07 10:54:50.296821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.825 [2024-11-07 10:54:50.296828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.825 [2024-11-07 10:54:50.296835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.825 [2024-11-07 10:54:50.309083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.825 [2024-11-07 10:54:50.309518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.825 [2024-11-07 10:54:50.309535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.825 [2024-11-07 10:54:50.309543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.825 [2024-11-07 10:54:50.309716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.825 [2024-11-07 10:54:50.309892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.825 [2024-11-07 10:54:50.309902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.825 [2024-11-07 10:54:50.309909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.825 [2024-11-07 10:54:50.309915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.825 [2024-11-07 10:54:50.322203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.825 [2024-11-07 10:54:50.322623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.825 [2024-11-07 10:54:50.322641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.825 [2024-11-07 10:54:50.322649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.825 [2024-11-07 10:54:50.322834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.825 [2024-11-07 10:54:50.323006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.825 [2024-11-07 10:54:50.323016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.825 [2024-11-07 10:54:50.323023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.825 [2024-11-07 10:54:50.323029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.825 [2024-11-07 10:54:50.335361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.825 [2024-11-07 10:54:50.335827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.825 [2024-11-07 10:54:50.335873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.825 [2024-11-07 10:54:50.335898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.825 [2024-11-07 10:54:50.336410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.825 [2024-11-07 10:54:50.336590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.826 [2024-11-07 10:54:50.336601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.826 [2024-11-07 10:54:50.336608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.826 [2024-11-07 10:54:50.336615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.826 [2024-11-07 10:54:50.348449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.826 [2024-11-07 10:54:50.348800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.826 [2024-11-07 10:54:50.348844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.826 [2024-11-07 10:54:50.348869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.826 [2024-11-07 10:54:50.349342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.826 [2024-11-07 10:54:50.349523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.826 [2024-11-07 10:54:50.349534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.826 [2024-11-07 10:54:50.349540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.826 [2024-11-07 10:54:50.349547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.826 [2024-11-07 10:54:50.361427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.826 [2024-11-07 10:54:50.361757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.826 [2024-11-07 10:54:50.361775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.826 [2024-11-07 10:54:50.361783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.826 [2024-11-07 10:54:50.361956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.826 [2024-11-07 10:54:50.362131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.826 [2024-11-07 10:54:50.362141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.826 [2024-11-07 10:54:50.362148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.826 [2024-11-07 10:54:50.362154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.826 [2024-11-07 10:54:50.374394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.826 [2024-11-07 10:54:50.374781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.826 [2024-11-07 10:54:50.374802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.826 [2024-11-07 10:54:50.374811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.826 [2024-11-07 10:54:50.374984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.826 [2024-11-07 10:54:50.375158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.826 [2024-11-07 10:54:50.375167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.826 [2024-11-07 10:54:50.375174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.826 [2024-11-07 10:54:50.375182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.826 [2024-11-07 10:54:50.387490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.826 [2024-11-07 10:54:50.387911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.826 [2024-11-07 10:54:50.387930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.826 [2024-11-07 10:54:50.387938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.826 [2024-11-07 10:54:50.388115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.826 [2024-11-07 10:54:50.388294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.826 [2024-11-07 10:54:50.388304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.826 [2024-11-07 10:54:50.388311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.826 [2024-11-07 10:54:50.388318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.826 [2024-11-07 10:54:50.400671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.826 [2024-11-07 10:54:50.401029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.826 [2024-11-07 10:54:50.401074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.826 [2024-11-07 10:54:50.401098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.826 [2024-11-07 10:54:50.401575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.826 [2024-11-07 10:54:50.401755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.826 [2024-11-07 10:54:50.401765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.826 [2024-11-07 10:54:50.401773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.826 [2024-11-07 10:54:50.401780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.826 [2024-11-07 10:54:50.413623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.826 [2024-11-07 10:54:50.413968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.826 [2024-11-07 10:54:50.413985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.826 [2024-11-07 10:54:50.413993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.826 [2024-11-07 10:54:50.414169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.826 [2024-11-07 10:54:50.414343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.826 [2024-11-07 10:54:50.414353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.826 [2024-11-07 10:54:50.414359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.826 [2024-11-07 10:54:50.414367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.826 [2024-11-07 10:54:50.426624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.826 [2024-11-07 10:54:50.426990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.826 [2024-11-07 10:54:50.427034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.826 [2024-11-07 10:54:50.427059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.826 [2024-11-07 10:54:50.427612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.826 [2024-11-07 10:54:50.427787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.826 [2024-11-07 10:54:50.427797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.826 [2024-11-07 10:54:50.427804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.826 [2024-11-07 10:54:50.427810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.826 [2024-11-07 10:54:50.439688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.826 [2024-11-07 10:54:50.440109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.826 [2024-11-07 10:54:50.440127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.826 [2024-11-07 10:54:50.440135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.826 [2024-11-07 10:54:50.440313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.826 [2024-11-07 10:54:50.440498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.826 [2024-11-07 10:54:50.440509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.826 [2024-11-07 10:54:50.440516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.826 [2024-11-07 10:54:50.440523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.826 [2024-11-07 10:54:50.452871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.826 [2024-11-07 10:54:50.453267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.826 [2024-11-07 10:54:50.453312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.826 [2024-11-07 10:54:50.453337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.826 [2024-11-07 10:54:50.453826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.826 [2024-11-07 10:54:50.454001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.826 [2024-11-07 10:54:50.454011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.826 [2024-11-07 10:54:50.454024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.826 [2024-11-07 10:54:50.454031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:22.826 [2024-11-07 10:54:50.465967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.826 [2024-11-07 10:54:50.466401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.826 [2024-11-07 10:54:50.466418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.827 [2024-11-07 10:54:50.466427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.827 [2024-11-07 10:54:50.466606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.827 [2024-11-07 10:54:50.466780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.827 [2024-11-07 10:54:50.466789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.827 [2024-11-07 10:54:50.466796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.827 [2024-11-07 10:54:50.466803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:22.827 [2024-11-07 10:54:50.478987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:22.827 [2024-11-07 10:54:50.479418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.827 [2024-11-07 10:54:50.479478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:22.827 [2024-11-07 10:54:50.479503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:22.827 [2024-11-07 10:54:50.480085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:22.827 [2024-11-07 10:54:50.480682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:22.827 [2024-11-07 10:54:50.480711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:22.827 [2024-11-07 10:54:50.480733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:22.827 [2024-11-07 10:54:50.480753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.087 [2024-11-07 10:54:50.492079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.087 [2024-11-07 10:54:50.492465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.087 [2024-11-07 10:54:50.492484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.087 [2024-11-07 10:54:50.492493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.087 [2024-11-07 10:54:50.492671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.087 [2024-11-07 10:54:50.492851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.087 [2024-11-07 10:54:50.492861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.087 [2024-11-07 10:54:50.492868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.087 [2024-11-07 10:54:50.492875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.087 [2024-11-07 10:54:50.505113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.087 [2024-11-07 10:54:50.505566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.087 [2024-11-07 10:54:50.505611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.087 [2024-11-07 10:54:50.505636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.087 [2024-11-07 10:54:50.506033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.087 [2024-11-07 10:54:50.506207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.087 [2024-11-07 10:54:50.506217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.087 [2024-11-07 10:54:50.506224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.087 [2024-11-07 10:54:50.506231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.087 [2024-11-07 10:54:50.518066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.087 [2024-11-07 10:54:50.518497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.087 [2024-11-07 10:54:50.518548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.087 [2024-11-07 10:54:50.518572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.087 [2024-11-07 10:54:50.519085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.087 [2024-11-07 10:54:50.519259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.087 [2024-11-07 10:54:50.519269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.087 [2024-11-07 10:54:50.519276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.087 [2024-11-07 10:54:50.519284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.087 [2024-11-07 10:54:50.531122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.087 [2024-11-07 10:54:50.531468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.087 [2024-11-07 10:54:50.531486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.087 [2024-11-07 10:54:50.531494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.087 [2024-11-07 10:54:50.531667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.087 [2024-11-07 10:54:50.531841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.087 [2024-11-07 10:54:50.531851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.087 [2024-11-07 10:54:50.531858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.087 [2024-11-07 10:54:50.531864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.087 [2024-11-07 10:54:50.544094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.087 [2024-11-07 10:54:50.544458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.087 [2024-11-07 10:54:50.544479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.087 [2024-11-07 10:54:50.544486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.087 [2024-11-07 10:54:50.544660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.087 [2024-11-07 10:54:50.544836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.087 [2024-11-07 10:54:50.544845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.087 [2024-11-07 10:54:50.544852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.087 [2024-11-07 10:54:50.544859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.087 [2024-11-07 10:54:50.557066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.087 [2024-11-07 10:54:50.557495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.087 [2024-11-07 10:54:50.557514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.087 [2024-11-07 10:54:50.557522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.087 [2024-11-07 10:54:50.557694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.087 [2024-11-07 10:54:50.557868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.087 [2024-11-07 10:54:50.557877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.087 [2024-11-07 10:54:50.557884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.087 [2024-11-07 10:54:50.557891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.087 [2024-11-07 10:54:50.570093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.087 [2024-11-07 10:54:50.570442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.087 [2024-11-07 10:54:50.570461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.087 [2024-11-07 10:54:50.570469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.087 [2024-11-07 10:54:50.570643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.087 [2024-11-07 10:54:50.570818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.087 [2024-11-07 10:54:50.570827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.087 [2024-11-07 10:54:50.570836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.087 [2024-11-07 10:54:50.570843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.087 [2024-11-07 10:54:50.583095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.087 [2024-11-07 10:54:50.583558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.087 [2024-11-07 10:54:50.583603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.087 [2024-11-07 10:54:50.583627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.087 [2024-11-07 10:54:50.584216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.087 [2024-11-07 10:54:50.584569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.087 [2024-11-07 10:54:50.584579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.087 [2024-11-07 10:54:50.584586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.087 [2024-11-07 10:54:50.584593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.087 [2024-11-07 10:54:50.596679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.087 [2024-11-07 10:54:50.597117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.087 [2024-11-07 10:54:50.597162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.087 [2024-11-07 10:54:50.597186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.088 [2024-11-07 10:54:50.597782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.088 [2024-11-07 10:54:50.598196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.088 [2024-11-07 10:54:50.598205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.088 [2024-11-07 10:54:50.598212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.088 [2024-11-07 10:54:50.598218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.088 [2024-11-07 10:54:50.609529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.088 [2024-11-07 10:54:50.609953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.088 [2024-11-07 10:54:50.609970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.088 [2024-11-07 10:54:50.609977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.088 [2024-11-07 10:54:50.610140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.088 [2024-11-07 10:54:50.610304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.088 [2024-11-07 10:54:50.610314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.088 [2024-11-07 10:54:50.610320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.088 [2024-11-07 10:54:50.610327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.088 [2024-11-07 10:54:50.622569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.088 [2024-11-07 10:54:50.622985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.088 [2024-11-07 10:54:50.623002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.088 [2024-11-07 10:54:50.623010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.088 [2024-11-07 10:54:50.623183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.088 [2024-11-07 10:54:50.623363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.088 [2024-11-07 10:54:50.623373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.088 [2024-11-07 10:54:50.623383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.088 [2024-11-07 10:54:50.623391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.088 [2024-11-07 10:54:50.635490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.088 [2024-11-07 10:54:50.635880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.088 [2024-11-07 10:54:50.635898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.088 [2024-11-07 10:54:50.635907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.088 [2024-11-07 10:54:50.636071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.088 [2024-11-07 10:54:50.636235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.088 [2024-11-07 10:54:50.636244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.088 [2024-11-07 10:54:50.636251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.088 [2024-11-07 10:54:50.636258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.088 [2024-11-07 10:54:50.648548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.088 [2024-11-07 10:54:50.648913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.088 [2024-11-07 10:54:50.648932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.088 [2024-11-07 10:54:50.648941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.088 [2024-11-07 10:54:50.649115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.088 [2024-11-07 10:54:50.649289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.088 [2024-11-07 10:54:50.649299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.088 [2024-11-07 10:54:50.649308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.088 [2024-11-07 10:54:50.649316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.088 [2024-11-07 10:54:50.661393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.088 [2024-11-07 10:54:50.661729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.088 [2024-11-07 10:54:50.661747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.088 [2024-11-07 10:54:50.661755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.088 [2024-11-07 10:54:50.661918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.088 [2024-11-07 10:54:50.662083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.088 [2024-11-07 10:54:50.662092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.088 [2024-11-07 10:54:50.662099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.088 [2024-11-07 10:54:50.662105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.088 [2024-11-07 10:54:50.674202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.088 [2024-11-07 10:54:50.674564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.088 [2024-11-07 10:54:50.674581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.088 [2024-11-07 10:54:50.674589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.088 [2024-11-07 10:54:50.674753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.088 [2024-11-07 10:54:50.674917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.088 [2024-11-07 10:54:50.674926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.088 [2024-11-07 10:54:50.674933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.088 [2024-11-07 10:54:50.674940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.088 [2024-11-07 10:54:50.687043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.088 [2024-11-07 10:54:50.687467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.088 [2024-11-07 10:54:50.687485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.088 [2024-11-07 10:54:50.687493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.088 [2024-11-07 10:54:50.687656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.088 [2024-11-07 10:54:50.687826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.088 [2024-11-07 10:54:50.687837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.088 [2024-11-07 10:54:50.687843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.088 [2024-11-07 10:54:50.687849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.088 [2024-11-07 10:54:50.699917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.088 [2024-11-07 10:54:50.700361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.088 [2024-11-07 10:54:50.700407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.088 [2024-11-07 10:54:50.700431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.088 [2024-11-07 10:54:50.701028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.088 [2024-11-07 10:54:50.701390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.088 [2024-11-07 10:54:50.701399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.088 [2024-11-07 10:54:50.701406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.088 [2024-11-07 10:54:50.701413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.088 [2024-11-07 10:54:50.712743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.088 [2024-11-07 10:54:50.713165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.088 [2024-11-07 10:54:50.713185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.088 [2024-11-07 10:54:50.713193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.088 [2024-11-07 10:54:50.713356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.088 [2024-11-07 10:54:50.713527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.088 [2024-11-07 10:54:50.713537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.088 [2024-11-07 10:54:50.713543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.088 [2024-11-07 10:54:50.713550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.088 [2024-11-07 10:54:50.725645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.088 [2024-11-07 10:54:50.726064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.088 [2024-11-07 10:54:50.726112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.089 [2024-11-07 10:54:50.726137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.089 [2024-11-07 10:54:50.726731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.089 [2024-11-07 10:54:50.727317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.089 [2024-11-07 10:54:50.727343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.089 [2024-11-07 10:54:50.727363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.089 [2024-11-07 10:54:50.727371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.089 [2024-11-07 10:54:50.738472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.089 [2024-11-07 10:54:50.738896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.089 [2024-11-07 10:54:50.738913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.089 [2024-11-07 10:54:50.738920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.089 [2024-11-07 10:54:50.739084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.089 [2024-11-07 10:54:50.739250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.089 [2024-11-07 10:54:50.739259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.089 [2024-11-07 10:54:50.739266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.089 [2024-11-07 10:54:50.739272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.089 [2024-11-07 10:54:50.751670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.089 [2024-11-07 10:54:50.752047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.089 [2024-11-07 10:54:50.752064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.089 [2024-11-07 10:54:50.752072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.089 [2024-11-07 10:54:50.752245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.089 [2024-11-07 10:54:50.752425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.089 [2024-11-07 10:54:50.752440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.089 [2024-11-07 10:54:50.752448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.089 [2024-11-07 10:54:50.752455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.349 [2024-11-07 10:54:50.764564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.349 [2024-11-07 10:54:50.764992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.349 [2024-11-07 10:54:50.765037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.349 [2024-11-07 10:54:50.765061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.349 [2024-11-07 10:54:50.765496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.349 [2024-11-07 10:54:50.765661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.349 [2024-11-07 10:54:50.765670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.349 [2024-11-07 10:54:50.765677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.349 [2024-11-07 10:54:50.765683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.349 [2024-11-07 10:54:50.777343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.349 [2024-11-07 10:54:50.777770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.349 [2024-11-07 10:54:50.777787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.349 [2024-11-07 10:54:50.777795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.349 [2024-11-07 10:54:50.777959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.349 [2024-11-07 10:54:50.778124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.349 [2024-11-07 10:54:50.778133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.349 [2024-11-07 10:54:50.778139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.349 [2024-11-07 10:54:50.778146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.349 [2024-11-07 10:54:50.790266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.349 [2024-11-07 10:54:50.790668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.349 [2024-11-07 10:54:50.790685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.349 [2024-11-07 10:54:50.790694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.349 [2024-11-07 10:54:50.790857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.349 [2024-11-07 10:54:50.791022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.349 [2024-11-07 10:54:50.791031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.349 [2024-11-07 10:54:50.791042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.349 [2024-11-07 10:54:50.791049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.349 [2024-11-07 10:54:50.803269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.349 [2024-11-07 10:54:50.803695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.349 [2024-11-07 10:54:50.803713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.349 [2024-11-07 10:54:50.803721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.349 [2024-11-07 10:54:50.803886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.349 [2024-11-07 10:54:50.804049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.349 [2024-11-07 10:54:50.804059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.349 [2024-11-07 10:54:50.804065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.349 [2024-11-07 10:54:50.804072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.349 [2024-11-07 10:54:50.816169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.349 [2024-11-07 10:54:50.816605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.349 [2024-11-07 10:54:50.816650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.349 [2024-11-07 10:54:50.816675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.349 [2024-11-07 10:54:50.817253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.349 [2024-11-07 10:54:50.817849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.349 [2024-11-07 10:54:50.817877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.349 [2024-11-07 10:54:50.817908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.349 [2024-11-07 10:54:50.817915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.349 [2024-11-07 10:54:50.829037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.349 [2024-11-07 10:54:50.829458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.349 [2024-11-07 10:54:50.829475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.349 [2024-11-07 10:54:50.829484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.349 [2024-11-07 10:54:50.829647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.349 [2024-11-07 10:54:50.829812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.349 [2024-11-07 10:54:50.829821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.349 [2024-11-07 10:54:50.829827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.349 [2024-11-07 10:54:50.829834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.349 [2024-11-07 10:54:50.841948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.349 [2024-11-07 10:54:50.842316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.349 [2024-11-07 10:54:50.842333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.349 [2024-11-07 10:54:50.842341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.349 [2024-11-07 10:54:50.842530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.349 [2024-11-07 10:54:50.842716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.349 [2024-11-07 10:54:50.842725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.349 [2024-11-07 10:54:50.842732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.349 [2024-11-07 10:54:50.842738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.349 [2024-11-07 10:54:50.854870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.349 [2024-11-07 10:54:50.855258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.349 [2024-11-07 10:54:50.855303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.349 [2024-11-07 10:54:50.855330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.349 [2024-11-07 10:54:50.855822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.349 [2024-11-07 10:54:50.855998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.349 [2024-11-07 10:54:50.856007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.349 [2024-11-07 10:54:50.856015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.349 [2024-11-07 10:54:50.856022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.349 [2024-11-07 10:54:50.867699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.349 [2024-11-07 10:54:50.868041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.349 [2024-11-07 10:54:50.868059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.349 [2024-11-07 10:54:50.868066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.349 [2024-11-07 10:54:50.868231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.349 [2024-11-07 10:54:50.868396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.349 [2024-11-07 10:54:50.868405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.350 [2024-11-07 10:54:50.868411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.350 [2024-11-07 10:54:50.868417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.350 [2024-11-07 10:54:50.880635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.350 [2024-11-07 10:54:50.881047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.350 [2024-11-07 10:54:50.881064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.350 [2024-11-07 10:54:50.881074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.350 [2024-11-07 10:54:50.881238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.350 [2024-11-07 10:54:50.881402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.350 [2024-11-07 10:54:50.881412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.350 [2024-11-07 10:54:50.881419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.350 [2024-11-07 10:54:50.881425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.350 [2024-11-07 10:54:50.893551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.350 [2024-11-07 10:54:50.893925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.350 [2024-11-07 10:54:50.893943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.350 [2024-11-07 10:54:50.893951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.350 [2024-11-07 10:54:50.894114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.350 [2024-11-07 10:54:50.894279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.350 [2024-11-07 10:54:50.894288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.350 [2024-11-07 10:54:50.894294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.350 [2024-11-07 10:54:50.894301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.350 [2024-11-07 10:54:50.906625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.350 [2024-11-07 10:54:50.907039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.350 [2024-11-07 10:54:50.907057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.350 [2024-11-07 10:54:50.907065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.350 [2024-11-07 10:54:50.907238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.350 [2024-11-07 10:54:50.907411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.350 [2024-11-07 10:54:50.907421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.350 [2024-11-07 10:54:50.907428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.350 [2024-11-07 10:54:50.907441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.350 [2024-11-07 10:54:50.919422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.350 [2024-11-07 10:54:50.919834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.350 [2024-11-07 10:54:50.919850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.350 [2024-11-07 10:54:50.919873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.350 [2024-11-07 10:54:50.920454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.350 [2024-11-07 10:54:50.920623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.350 [2024-11-07 10:54:50.920633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.350 [2024-11-07 10:54:50.920640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.350 [2024-11-07 10:54:50.920646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.350 [2024-11-07 10:54:50.932225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.350 [2024-11-07 10:54:50.932666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.350 [2024-11-07 10:54:50.932683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.350 [2024-11-07 10:54:50.932691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.350 [2024-11-07 10:54:50.932854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.350 [2024-11-07 10:54:50.933017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.350 [2024-11-07 10:54:50.933026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.350 [2024-11-07 10:54:50.933033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.350 [2024-11-07 10:54:50.933040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.350 [2024-11-07 10:54:50.945145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.350 [2024-11-07 10:54:50.945591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.350 [2024-11-07 10:54:50.945610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.350 [2024-11-07 10:54:50.945618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.350 [2024-11-07 10:54:50.945791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.350 [2024-11-07 10:54:50.945965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.350 [2024-11-07 10:54:50.945974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.350 [2024-11-07 10:54:50.945981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.350 [2024-11-07 10:54:50.945987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.350 [2024-11-07 10:54:50.957986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.350 [2024-11-07 10:54:50.958413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.350 [2024-11-07 10:54:50.958469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.350 [2024-11-07 10:54:50.958495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.350 [2024-11-07 10:54:50.958938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.350 [2024-11-07 10:54:50.959104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.350 [2024-11-07 10:54:50.959111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.350 [2024-11-07 10:54:50.959121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.350 [2024-11-07 10:54:50.959128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.350 [2024-11-07 10:54:50.970866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.350 [2024-11-07 10:54:50.971293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.350 [2024-11-07 10:54:50.971311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.350 [2024-11-07 10:54:50.971319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.350 [2024-11-07 10:54:50.971491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.350 [2024-11-07 10:54:50.971657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.350 [2024-11-07 10:54:50.971667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.350 [2024-11-07 10:54:50.971674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.350 [2024-11-07 10:54:50.971680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.350 [2024-11-07 10:54:50.983775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.350 [2024-11-07 10:54:50.984198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.350 [2024-11-07 10:54:50.984215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.350 [2024-11-07 10:54:50.984223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.350 [2024-11-07 10:54:50.984386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.350 [2024-11-07 10:54:50.984557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.350 [2024-11-07 10:54:50.984566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.350 [2024-11-07 10:54:50.984573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.350 [2024-11-07 10:54:50.984579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.350 7068.50 IOPS, 27.61 MiB/s [2024-11-07T09:54:51.021Z] [2024-11-07 10:54:50.997425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.350 [2024-11-07 10:54:50.997860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.350 [2024-11-07 10:54:50.997906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.350 [2024-11-07 10:54:50.997930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.350 [2024-11-07 10:54:50.998295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.350 [2024-11-07 10:54:50.998530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.351 [2024-11-07 10:54:50.998544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.351 [2024-11-07 10:54:50.998554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.351 [2024-11-07 10:54:50.998564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.351 [2024-11-07 10:54:51.010890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.351 [2024-11-07 10:54:51.011331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.351 [2024-11-07 10:54:51.011348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.351 [2024-11-07 10:54:51.011357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.351 [2024-11-07 10:54:51.011544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.351 [2024-11-07 10:54:51.011724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.351 [2024-11-07 10:54:51.011733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.351 [2024-11-07 10:54:51.011741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.351 [2024-11-07 10:54:51.011748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.611 [2024-11-07 10:54:51.023933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.611 [2024-11-07 10:54:51.024307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.611 [2024-11-07 10:54:51.024325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.611 [2024-11-07 10:54:51.024333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.611 [2024-11-07 10:54:51.024513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.611 [2024-11-07 10:54:51.024697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.611 [2024-11-07 10:54:51.024706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.611 [2024-11-07 10:54:51.024713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.611 [2024-11-07 10:54:51.024720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.611 [2024-11-07 10:54:51.036849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.611 [2024-11-07 10:54:51.037274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.611 [2024-11-07 10:54:51.037319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.611 [2024-11-07 10:54:51.037344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.611 [2024-11-07 10:54:51.037939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.611 [2024-11-07 10:54:51.038477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.611 [2024-11-07 10:54:51.038505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.611 [2024-11-07 10:54:51.038516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.611 [2024-11-07 10:54:51.038527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.611 [2024-11-07 10:54:51.050384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.611 [2024-11-07 10:54:51.050799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.611 [2024-11-07 10:54:51.050819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.611 [2024-11-07 10:54:51.050827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.611 [2024-11-07 10:54:51.050996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.611 [2024-11-07 10:54:51.051165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.611 [2024-11-07 10:54:51.051175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.611 [2024-11-07 10:54:51.051181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.611 [2024-11-07 10:54:51.051188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.611 [2024-11-07 10:54:51.063213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.611 [2024-11-07 10:54:51.063556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.611 [2024-11-07 10:54:51.063573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.611 [2024-11-07 10:54:51.063581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.611 [2024-11-07 10:54:51.063746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.611 [2024-11-07 10:54:51.063910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.611 [2024-11-07 10:54:51.063919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.611 [2024-11-07 10:54:51.063926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.611 [2024-11-07 10:54:51.063933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.611 [2024-11-07 10:54:51.076240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.611 [2024-11-07 10:54:51.076585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.611 [2024-11-07 10:54:51.076625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.611 [2024-11-07 10:54:51.076652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.611 [2024-11-07 10:54:51.077167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.611 [2024-11-07 10:54:51.077342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.611 [2024-11-07 10:54:51.077351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.611 [2024-11-07 10:54:51.077358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.611 [2024-11-07 10:54:51.077365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.611 [2024-11-07 10:54:51.089152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.611 [2024-11-07 10:54:51.089551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.611 [2024-11-07 10:54:51.089568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.611 [2024-11-07 10:54:51.089576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.611 [2024-11-07 10:54:51.089744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.611 [2024-11-07 10:54:51.089909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.611 [2024-11-07 10:54:51.089918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.611 [2024-11-07 10:54:51.089925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.611 [2024-11-07 10:54:51.089932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.611 [2024-11-07 10:54:51.102057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.611 [2024-11-07 10:54:51.102333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.611 [2024-11-07 10:54:51.102351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.611 [2024-11-07 10:54:51.102359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.611 [2024-11-07 10:54:51.102529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.611 [2024-11-07 10:54:51.102694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.611 [2024-11-07 10:54:51.102704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.611 [2024-11-07 10:54:51.102710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.611 [2024-11-07 10:54:51.102717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.611 [2024-11-07 10:54:51.115139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.611 [2024-11-07 10:54:51.115529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.611 [2024-11-07 10:54:51.115547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.611 [2024-11-07 10:54:51.115555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.611 [2024-11-07 10:54:51.115728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.611 [2024-11-07 10:54:51.115902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.611 [2024-11-07 10:54:51.115912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.611 [2024-11-07 10:54:51.115920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.611 [2024-11-07 10:54:51.115927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.611 [2024-11-07 10:54:51.128108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.611 [2024-11-07 10:54:51.128544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.611 [2024-11-07 10:54:51.128563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.611 [2024-11-07 10:54:51.128571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.611 [2024-11-07 10:54:51.128744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.611 [2024-11-07 10:54:51.128920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.611 [2024-11-07 10:54:51.128931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.611 [2024-11-07 10:54:51.128941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.611 [2024-11-07 10:54:51.128949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.611 [2024-11-07 10:54:51.140903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.611 [2024-11-07 10:54:51.141336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.612 [2024-11-07 10:54:51.141381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.612 [2024-11-07 10:54:51.141407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.612 [2024-11-07 10:54:51.142003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.612 [2024-11-07 10:54:51.142386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.612 [2024-11-07 10:54:51.142396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.612 [2024-11-07 10:54:51.142402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.612 [2024-11-07 10:54:51.142409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.612 [2024-11-07 10:54:51.154053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.612 [2024-11-07 10:54:51.154507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.612 [2024-11-07 10:54:51.154526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.612 [2024-11-07 10:54:51.154534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.612 [2024-11-07 10:54:51.154714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.612 [2024-11-07 10:54:51.154895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.612 [2024-11-07 10:54:51.154905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.612 [2024-11-07 10:54:51.154913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.612 [2024-11-07 10:54:51.154922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.612 [2024-11-07 10:54:51.167153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.612 [2024-11-07 10:54:51.167572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.612 [2024-11-07 10:54:51.167591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.612 [2024-11-07 10:54:51.167599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.612 [2024-11-07 10:54:51.167772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.612 [2024-11-07 10:54:51.167948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.612 [2024-11-07 10:54:51.167959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.612 [2024-11-07 10:54:51.167966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.612 [2024-11-07 10:54:51.167973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.612 [2024-11-07 10:54:51.180003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.612 [2024-11-07 10:54:51.180464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.612 [2024-11-07 10:54:51.180509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.612 [2024-11-07 10:54:51.180534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.612 [2024-11-07 10:54:51.181114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.612 [2024-11-07 10:54:51.181712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.612 [2024-11-07 10:54:51.181740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.612 [2024-11-07 10:54:51.181761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.612 [2024-11-07 10:54:51.181793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.612 [2024-11-07 10:54:51.192843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.612 [2024-11-07 10:54:51.193254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.612 [2024-11-07 10:54:51.193299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.612 [2024-11-07 10:54:51.193323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.612 [2024-11-07 10:54:51.193920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.612 [2024-11-07 10:54:51.194468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.612 [2024-11-07 10:54:51.194478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.612 [2024-11-07 10:54:51.194484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.612 [2024-11-07 10:54:51.194491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.612 [2024-11-07 10:54:51.205670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.612 [2024-11-07 10:54:51.206080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.612 [2024-11-07 10:54:51.206098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.612 [2024-11-07 10:54:51.206106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.612 [2024-11-07 10:54:51.206270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.612 [2024-11-07 10:54:51.206439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.612 [2024-11-07 10:54:51.206448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.612 [2024-11-07 10:54:51.206455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.612 [2024-11-07 10:54:51.206461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.612 [2024-11-07 10:54:51.218578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.612 [2024-11-07 10:54:51.218854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.612 [2024-11-07 10:54:51.218874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.612 [2024-11-07 10:54:51.218882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.612 [2024-11-07 10:54:51.219045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.612 [2024-11-07 10:54:51.219209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.612 [2024-11-07 10:54:51.219218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.612 [2024-11-07 10:54:51.219225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.612 [2024-11-07 10:54:51.219231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.612 [2024-11-07 10:54:51.231453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.612 [2024-11-07 10:54:51.231851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.612 [2024-11-07 10:54:51.231868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.612 [2024-11-07 10:54:51.231876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.612 [2024-11-07 10:54:51.232039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.612 [2024-11-07 10:54:51.232203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.612 [2024-11-07 10:54:51.232212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.612 [2024-11-07 10:54:51.232219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.612 [2024-11-07 10:54:51.232225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.612 [2024-11-07 10:54:51.244342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.612 [2024-11-07 10:54:51.244680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.612 [2024-11-07 10:54:51.244726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.612 [2024-11-07 10:54:51.244750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.612 [2024-11-07 10:54:51.245332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.612 [2024-11-07 10:54:51.245929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.612 [2024-11-07 10:54:51.245965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.612 [2024-11-07 10:54:51.245972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.612 [2024-11-07 10:54:51.245978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.612 [2024-11-07 10:54:51.257173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.612 [2024-11-07 10:54:51.257614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.612 [2024-11-07 10:54:51.257632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.612 [2024-11-07 10:54:51.257641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.612 [2024-11-07 10:54:51.257818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.612 [2024-11-07 10:54:51.257993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.612 [2024-11-07 10:54:51.258003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.612 [2024-11-07 10:54:51.258010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.612 [2024-11-07 10:54:51.258017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.612 [2024-11-07 10:54:51.270038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.612 [2024-11-07 10:54:51.270442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.613 [2024-11-07 10:54:51.270459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.613 [2024-11-07 10:54:51.270467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.613 [2024-11-07 10:54:51.270632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.613 [2024-11-07 10:54:51.270795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.613 [2024-11-07 10:54:51.270805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.613 [2024-11-07 10:54:51.270812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.613 [2024-11-07 10:54:51.270818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.873 [2024-11-07 10:54:51.282981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.873 [2024-11-07 10:54:51.283459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.873 [2024-11-07 10:54:51.283505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.873 [2024-11-07 10:54:51.283530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.873 [2024-11-07 10:54:51.284112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.873 [2024-11-07 10:54:51.284705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.873 [2024-11-07 10:54:51.284733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.873 [2024-11-07 10:54:51.284754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.873 [2024-11-07 10:54:51.284773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.873 [2024-11-07 10:54:51.295853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.873 [2024-11-07 10:54:51.296216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.873 [2024-11-07 10:54:51.296261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.873 [2024-11-07 10:54:51.296286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.873 [2024-11-07 10:54:51.296766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.873 [2024-11-07 10:54:51.296941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.873 [2024-11-07 10:54:51.296951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.873 [2024-11-07 10:54:51.296961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.873 [2024-11-07 10:54:51.296969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.873 [2024-11-07 10:54:51.308686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.873 [2024-11-07 10:54:51.309046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.873 [2024-11-07 10:54:51.309062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.873 [2024-11-07 10:54:51.309070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.873 [2024-11-07 10:54:51.309233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.873 [2024-11-07 10:54:51.309397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.873 [2024-11-07 10:54:51.309406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.873 [2024-11-07 10:54:51.309413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.873 [2024-11-07 10:54:51.309419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.873 [2024-11-07 10:54:51.321541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.873 [2024-11-07 10:54:51.321894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.873 [2024-11-07 10:54:51.321910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.873 [2024-11-07 10:54:51.321918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.873 [2024-11-07 10:54:51.322080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.873 [2024-11-07 10:54:51.322245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.873 [2024-11-07 10:54:51.322255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.873 [2024-11-07 10:54:51.322262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.873 [2024-11-07 10:54:51.322268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.873 [2024-11-07 10:54:51.334338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.873 [2024-11-07 10:54:51.334684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.873 [2024-11-07 10:54:51.334743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.873 [2024-11-07 10:54:51.334770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.873 [2024-11-07 10:54:51.335352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.873 [2024-11-07 10:54:51.335837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.873 [2024-11-07 10:54:51.335846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.873 [2024-11-07 10:54:51.335853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.873 [2024-11-07 10:54:51.335860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.873 [2024-11-07 10:54:51.347296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.873 [2024-11-07 10:54:51.347725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.873 [2024-11-07 10:54:51.347743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.873 [2024-11-07 10:54:51.347751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.873 [2024-11-07 10:54:51.347913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.873 [2024-11-07 10:54:51.348077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.873 [2024-11-07 10:54:51.348087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.873 [2024-11-07 10:54:51.348093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.873 [2024-11-07 10:54:51.348100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.873 [2024-11-07 10:54:51.360201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.873 [2024-11-07 10:54:51.360547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.873 [2024-11-07 10:54:51.360593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.873 [2024-11-07 10:54:51.360619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.873 [2024-11-07 10:54:51.361092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.873 [2024-11-07 10:54:51.361257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.873 [2024-11-07 10:54:51.361267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.873 [2024-11-07 10:54:51.361273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.873 [2024-11-07 10:54:51.361280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.873 [2024-11-07 10:54:51.373034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.873 [2024-11-07 10:54:51.373466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.873 [2024-11-07 10:54:51.373512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.873 [2024-11-07 10:54:51.373537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.873 [2024-11-07 10:54:51.373946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.873 [2024-11-07 10:54:51.374111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.873 [2024-11-07 10:54:51.374120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.873 [2024-11-07 10:54:51.374127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.873 [2024-11-07 10:54:51.374133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.873 [2024-11-07 10:54:51.385890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.873 [2024-11-07 10:54:51.386305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.873 [2024-11-07 10:54:51.386328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.873 [2024-11-07 10:54:51.386335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.873 [2024-11-07 10:54:51.386505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.873 [2024-11-07 10:54:51.386670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.874 [2024-11-07 10:54:51.386679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.874 [2024-11-07 10:54:51.386686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.874 [2024-11-07 10:54:51.386693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.874 [2024-11-07 10:54:51.398823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.874 [2024-11-07 10:54:51.399106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.874 [2024-11-07 10:54:51.399123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.874 [2024-11-07 10:54:51.399131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.874 [2024-11-07 10:54:51.399294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.874 [2024-11-07 10:54:51.399465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.874 [2024-11-07 10:54:51.399474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.874 [2024-11-07 10:54:51.399481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.874 [2024-11-07 10:54:51.399488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.874 [2024-11-07 10:54:51.411731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.874 [2024-11-07 10:54:51.412169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.874 [2024-11-07 10:54:51.412213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.874 [2024-11-07 10:54:51.412237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.874 [2024-11-07 10:54:51.412727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.874 [2024-11-07 10:54:51.412984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.874 [2024-11-07 10:54:51.412997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.874 [2024-11-07 10:54:51.413008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.874 [2024-11-07 10:54:51.413018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.874 [2024-11-07 10:54:51.425222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.874 [2024-11-07 10:54:51.425596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.874 [2024-11-07 10:54:51.425614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.874 [2024-11-07 10:54:51.425622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.874 [2024-11-07 10:54:51.425798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.874 [2024-11-07 10:54:51.425972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.874 [2024-11-07 10:54:51.425982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.874 [2024-11-07 10:54:51.425989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.874 [2024-11-07 10:54:51.425996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.874 [2024-11-07 10:54:51.438036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.874 [2024-11-07 10:54:51.438384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.874 [2024-11-07 10:54:51.438402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.874 [2024-11-07 10:54:51.438409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.874 [2024-11-07 10:54:51.438578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.874 [2024-11-07 10:54:51.438742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.874 [2024-11-07 10:54:51.438751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.874 [2024-11-07 10:54:51.438758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.874 [2024-11-07 10:54:51.438764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.874 [2024-11-07 10:54:51.450891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.874 [2024-11-07 10:54:51.451309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.874 [2024-11-07 10:54:51.451355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.874 [2024-11-07 10:54:51.451380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.874 [2024-11-07 10:54:51.451971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.874 [2024-11-07 10:54:51.452394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.874 [2024-11-07 10:54:51.452403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.874 [2024-11-07 10:54:51.452410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.874 [2024-11-07 10:54:51.452417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.874 [2024-11-07 10:54:51.463694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.874 [2024-11-07 10:54:51.464041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.874 [2024-11-07 10:54:51.464057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.874 [2024-11-07 10:54:51.464064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.874 [2024-11-07 10:54:51.464228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.874 [2024-11-07 10:54:51.464392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.874 [2024-11-07 10:54:51.464401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.874 [2024-11-07 10:54:51.464411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.874 [2024-11-07 10:54:51.464419] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.874 [2024-11-07 10:54:51.476626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.874 [2024-11-07 10:54:51.476952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.874 [2024-11-07 10:54:51.476970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.874 [2024-11-07 10:54:51.476978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.874 [2024-11-07 10:54:51.477141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.874 [2024-11-07 10:54:51.477306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.874 [2024-11-07 10:54:51.477315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.874 [2024-11-07 10:54:51.477322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.874 [2024-11-07 10:54:51.477329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.874 [2024-11-07 10:54:51.489457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.874 [2024-11-07 10:54:51.489805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.874 [2024-11-07 10:54:51.489822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.874 [2024-11-07 10:54:51.489830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.874 [2024-11-07 10:54:51.489993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.874 [2024-11-07 10:54:51.490157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.874 [2024-11-07 10:54:51.490167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.874 [2024-11-07 10:54:51.490174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.874 [2024-11-07 10:54:51.490180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.874 [2024-11-07 10:54:51.502295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.874 [2024-11-07 10:54:51.502597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.874 [2024-11-07 10:54:51.502615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.874 [2024-11-07 10:54:51.502623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.874 [2024-11-07 10:54:51.502787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.874 [2024-11-07 10:54:51.502950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.874 [2024-11-07 10:54:51.502959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.874 [2024-11-07 10:54:51.502966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.874 [2024-11-07 10:54:51.502973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:23.874 [2024-11-07 10:54:51.515090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.874 [2024-11-07 10:54:51.515455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.874 [2024-11-07 10:54:51.515473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.874 [2024-11-07 10:54:51.515481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.874 [2024-11-07 10:54:51.515644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.874 [2024-11-07 10:54:51.515808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.875 [2024-11-07 10:54:51.515817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.875 [2024-11-07 10:54:51.515824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.875 [2024-11-07 10:54:51.515831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:23.875 [2024-11-07 10:54:51.527883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:23.875 [2024-11-07 10:54:51.528303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.875 [2024-11-07 10:54:51.528354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:23.875 [2024-11-07 10:54:51.528378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:23.875 [2024-11-07 10:54:51.528972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:23.875 [2024-11-07 10:54:51.529202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:23.875 [2024-11-07 10:54:51.529211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:23.875 [2024-11-07 10:54:51.529218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:23.875 [2024-11-07 10:54:51.529224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.134 [2024-11-07 10:54:51.540972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.134 [2024-11-07 10:54:51.541406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.134 [2024-11-07 10:54:51.541470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.134 [2024-11-07 10:54:51.541497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.134 [2024-11-07 10:54:51.541993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.134 [2024-11-07 10:54:51.542159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.134 [2024-11-07 10:54:51.542168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.134 [2024-11-07 10:54:51.542174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.134 [2024-11-07 10:54:51.542181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.134 [2024-11-07 10:54:51.553917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.134 [2024-11-07 10:54:51.554339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.134 [2024-11-07 10:54:51.554390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.134 [2024-11-07 10:54:51.554415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.134 [2024-11-07 10:54:51.555013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.134 [2024-11-07 10:54:51.555252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.134 [2024-11-07 10:54:51.555261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.134 [2024-11-07 10:54:51.555268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.134 [2024-11-07 10:54:51.555274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.134 [2024-11-07 10:54:51.566706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.134 [2024-11-07 10:54:51.567154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.135 [2024-11-07 10:54:51.567200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.135 [2024-11-07 10:54:51.567224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.135 [2024-11-07 10:54:51.567819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.135 [2024-11-07 10:54:51.568417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.135 [2024-11-07 10:54:51.568426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.135 [2024-11-07 10:54:51.568437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.135 [2024-11-07 10:54:51.568443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.135 [2024-11-07 10:54:51.579564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.135 [2024-11-07 10:54:51.580000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.135 [2024-11-07 10:54:51.580017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.135 [2024-11-07 10:54:51.580025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.135 [2024-11-07 10:54:51.580189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.135 [2024-11-07 10:54:51.580352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.135 [2024-11-07 10:54:51.580361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.135 [2024-11-07 10:54:51.580368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.135 [2024-11-07 10:54:51.580374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.135 [2024-11-07 10:54:51.592494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.135 [2024-11-07 10:54:51.592914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.135 [2024-11-07 10:54:51.592931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.135 [2024-11-07 10:54:51.592940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.135 [2024-11-07 10:54:51.593106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.135 [2024-11-07 10:54:51.593272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.135 [2024-11-07 10:54:51.593281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.135 [2024-11-07 10:54:51.593287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.135 [2024-11-07 10:54:51.593294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.135 [2024-11-07 10:54:51.605326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.135 [2024-11-07 10:54:51.605753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.135 [2024-11-07 10:54:51.605771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.135 [2024-11-07 10:54:51.605778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.135 [2024-11-07 10:54:51.605942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.135 [2024-11-07 10:54:51.606106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.135 [2024-11-07 10:54:51.606116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.135 [2024-11-07 10:54:51.606123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.135 [2024-11-07 10:54:51.606129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.135 [2024-11-07 10:54:51.618235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.135 [2024-11-07 10:54:51.618672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.135 [2024-11-07 10:54:51.618717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.135 [2024-11-07 10:54:51.618741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.135 [2024-11-07 10:54:51.619285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.135 [2024-11-07 10:54:51.619457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.135 [2024-11-07 10:54:51.619466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.135 [2024-11-07 10:54:51.619473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.135 [2024-11-07 10:54:51.619480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.135 [2024-11-07 10:54:51.631099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.135 [2024-11-07 10:54:51.631452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.135 [2024-11-07 10:54:51.631469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.135 [2024-11-07 10:54:51.631478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.135 [2024-11-07 10:54:51.631642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.135 [2024-11-07 10:54:51.631806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.135 [2024-11-07 10:54:51.631815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.135 [2024-11-07 10:54:51.631825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.135 [2024-11-07 10:54:51.631832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.135 [2024-11-07 10:54:51.643937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.135 [2024-11-07 10:54:51.644360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.135 [2024-11-07 10:54:51.644377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.135 [2024-11-07 10:54:51.644385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.135 [2024-11-07 10:54:51.644555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.135 [2024-11-07 10:54:51.644721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.135 [2024-11-07 10:54:51.644730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.135 [2024-11-07 10:54:51.644736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.135 [2024-11-07 10:54:51.644743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.135 [2024-11-07 10:54:51.656807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.135 [2024-11-07 10:54:51.657236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.135 [2024-11-07 10:54:51.657283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.135 [2024-11-07 10:54:51.657308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.135 [2024-11-07 10:54:51.657723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.135 [2024-11-07 10:54:51.657889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.135 [2024-11-07 10:54:51.657898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.135 [2024-11-07 10:54:51.657905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.135 [2024-11-07 10:54:51.657912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.135 [2024-11-07 10:54:51.669708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.135 [2024-11-07 10:54:51.670135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.135 [2024-11-07 10:54:51.670180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.135 [2024-11-07 10:54:51.670205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.135 [2024-11-07 10:54:51.670714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.135 [2024-11-07 10:54:51.670881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.135 [2024-11-07 10:54:51.670891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.135 [2024-11-07 10:54:51.670899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.135 [2024-11-07 10:54:51.670906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.135 [2024-11-07 10:54:51.682727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.135 [2024-11-07 10:54:51.683097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.135 [2024-11-07 10:54:51.683114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.135 [2024-11-07 10:54:51.683122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.135 [2024-11-07 10:54:51.683284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.135 [2024-11-07 10:54:51.683453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.135 [2024-11-07 10:54:51.683463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.135 [2024-11-07 10:54:51.683470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.135 [2024-11-07 10:54:51.683477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.135 [2024-11-07 10:54:51.695559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.136 [2024-11-07 10:54:51.695989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.136 [2024-11-07 10:54:51.696007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.136 [2024-11-07 10:54:51.696014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.136 [2024-11-07 10:54:51.696179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.136 [2024-11-07 10:54:51.696344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.136 [2024-11-07 10:54:51.696353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.136 [2024-11-07 10:54:51.696360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.136 [2024-11-07 10:54:51.696367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.136 [2024-11-07 10:54:51.708506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.136 [2024-11-07 10:54:51.708904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.136 [2024-11-07 10:54:51.708921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.136 [2024-11-07 10:54:51.708929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.136 [2024-11-07 10:54:51.709092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.136 [2024-11-07 10:54:51.709257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.136 [2024-11-07 10:54:51.709266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.136 [2024-11-07 10:54:51.709273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.136 [2024-11-07 10:54:51.709280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.136 [2024-11-07 10:54:51.721378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.136 [2024-11-07 10:54:51.721746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.136 [2024-11-07 10:54:51.721767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.136 [2024-11-07 10:54:51.721775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.136 [2024-11-07 10:54:51.721938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.136 [2024-11-07 10:54:51.722103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.136 [2024-11-07 10:54:51.722113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.136 [2024-11-07 10:54:51.722119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.136 [2024-11-07 10:54:51.722126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.136 [2024-11-07 10:54:51.734295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.136 [2024-11-07 10:54:51.734716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.136 [2024-11-07 10:54:51.734765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.136 [2024-11-07 10:54:51.734789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.136 [2024-11-07 10:54:51.735370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.136 [2024-11-07 10:54:51.735676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.136 [2024-11-07 10:54:51.735686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.136 [2024-11-07 10:54:51.735693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.136 [2024-11-07 10:54:51.735699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.136 [2024-11-07 10:54:51.747170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.136 [2024-11-07 10:54:51.747534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.136 [2024-11-07 10:54:51.747581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.136 [2024-11-07 10:54:51.747606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.136 [2024-11-07 10:54:51.748188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.136 [2024-11-07 10:54:51.748786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.136 [2024-11-07 10:54:51.748796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.136 [2024-11-07 10:54:51.748802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.136 [2024-11-07 10:54:51.748809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.136 [2024-11-07 10:54:51.759981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.136 [2024-11-07 10:54:51.760399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.136 [2024-11-07 10:54:51.760416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.136 [2024-11-07 10:54:51.760424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.136 [2024-11-07 10:54:51.760598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.136 [2024-11-07 10:54:51.760764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.136 [2024-11-07 10:54:51.760773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.136 [2024-11-07 10:54:51.760779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.136 [2024-11-07 10:54:51.760786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.136 [2024-11-07 10:54:51.772885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.136 [2024-11-07 10:54:51.773303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.136 [2024-11-07 10:54:51.773320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.136 [2024-11-07 10:54:51.773328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.136 [2024-11-07 10:54:51.773499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.136 [2024-11-07 10:54:51.773664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.136 [2024-11-07 10:54:51.773673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.136 [2024-11-07 10:54:51.773680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.136 [2024-11-07 10:54:51.773686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.136 [2024-11-07 10:54:51.785714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.136 [2024-11-07 10:54:51.786134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.136 [2024-11-07 10:54:51.786151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.136 [2024-11-07 10:54:51.786159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.136 [2024-11-07 10:54:51.786322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.136 [2024-11-07 10:54:51.786493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.136 [2024-11-07 10:54:51.786503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.136 [2024-11-07 10:54:51.786510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.136 [2024-11-07 10:54:51.786517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.136 [2024-11-07 10:54:51.798853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.136 [2024-11-07 10:54:51.799295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.136 [2024-11-07 10:54:51.799313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.136 [2024-11-07 10:54:51.799321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.136 [2024-11-07 10:54:51.799507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.136 [2024-11-07 10:54:51.799687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.136 [2024-11-07 10:54:51.799697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.136 [2024-11-07 10:54:51.799708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.136 [2024-11-07 10:54:51.799715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.396 [2024-11-07 10:54:51.811774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.396 [2024-11-07 10:54:51.812199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.396 [2024-11-07 10:54:51.812216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.396 [2024-11-07 10:54:51.812223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.396 [2024-11-07 10:54:51.812386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.396 [2024-11-07 10:54:51.812557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.396 [2024-11-07 10:54:51.812566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.396 [2024-11-07 10:54:51.812573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.396 [2024-11-07 10:54:51.812580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.396 [2024-11-07 10:54:51.824679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.396 [2024-11-07 10:54:51.825106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.396 [2024-11-07 10:54:51.825150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.396 [2024-11-07 10:54:51.825175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.396 [2024-11-07 10:54:51.825775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.396 [2024-11-07 10:54:51.825986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.396 [2024-11-07 10:54:51.825994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.396 [2024-11-07 10:54:51.826002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.396 [2024-11-07 10:54:51.826010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.396 [2024-11-07 10:54:51.837500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.396 [2024-11-07 10:54:51.837778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.396 [2024-11-07 10:54:51.837795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.396 [2024-11-07 10:54:51.837803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.396 [2024-11-07 10:54:51.837965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.396 [2024-11-07 10:54:51.838130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.396 [2024-11-07 10:54:51.838139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.396 [2024-11-07 10:54:51.838145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.396 [2024-11-07 10:54:51.838152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.396 [2024-11-07 10:54:51.850447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.396 [2024-11-07 10:54:51.850811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.396 [2024-11-07 10:54:51.850856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.396 [2024-11-07 10:54:51.850880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.396 [2024-11-07 10:54:51.851474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.396 [2024-11-07 10:54:51.852057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.396 [2024-11-07 10:54:51.852067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.396 [2024-11-07 10:54:51.852074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.396 [2024-11-07 10:54:51.852081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.396 [2024-11-07 10:54:51.863416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.396 [2024-11-07 10:54:51.863830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.396 [2024-11-07 10:54:51.863848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.396 [2024-11-07 10:54:51.863856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.396 [2024-11-07 10:54:51.864029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.396 [2024-11-07 10:54:51.864203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.396 [2024-11-07 10:54:51.864213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.396 [2024-11-07 10:54:51.864220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.396 [2024-11-07 10:54:51.864226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.396 [2024-11-07 10:54:51.876360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.396 [2024-11-07 10:54:51.876724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.396 [2024-11-07 10:54:51.876742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.397 [2024-11-07 10:54:51.876750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.397 [2024-11-07 10:54:51.876914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.397 [2024-11-07 10:54:51.877078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.397 [2024-11-07 10:54:51.877088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.397 [2024-11-07 10:54:51.877095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.397 [2024-11-07 10:54:51.877103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.397 [2024-11-07 10:54:51.889222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.397 [2024-11-07 10:54:51.889579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.397 [2024-11-07 10:54:51.889600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.397 [2024-11-07 10:54:51.889608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.397 [2024-11-07 10:54:51.889771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.397 [2024-11-07 10:54:51.889936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.397 [2024-11-07 10:54:51.889945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.397 [2024-11-07 10:54:51.889951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.397 [2024-11-07 10:54:51.889958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.397 [2024-11-07 10:54:51.902046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.397 [2024-11-07 10:54:51.902400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.397 [2024-11-07 10:54:51.902417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.397 [2024-11-07 10:54:51.902425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.397 [2024-11-07 10:54:51.902594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.397 [2024-11-07 10:54:51.902759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.397 [2024-11-07 10:54:51.902768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.397 [2024-11-07 10:54:51.902775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.397 [2024-11-07 10:54:51.902781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.397 [2024-11-07 10:54:51.914902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.397 [2024-11-07 10:54:51.915300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.397 [2024-11-07 10:54:51.915316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.397 [2024-11-07 10:54:51.915324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.397 [2024-11-07 10:54:51.915898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.397 [2024-11-07 10:54:51.916065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.397 [2024-11-07 10:54:51.916074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.397 [2024-11-07 10:54:51.916080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.397 [2024-11-07 10:54:51.916086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.397 [2024-11-07 10:54:51.927837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.397 [2024-11-07 10:54:51.928170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.397 [2024-11-07 10:54:51.928187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.397 [2024-11-07 10:54:51.928194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.397 [2024-11-07 10:54:51.928362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.397 [2024-11-07 10:54:51.928535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.397 [2024-11-07 10:54:51.928546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.397 [2024-11-07 10:54:51.928553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.397 [2024-11-07 10:54:51.928559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.397 [2024-11-07 10:54:51.940873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.397 [2024-11-07 10:54:51.941148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.397 [2024-11-07 10:54:51.941165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.397 [2024-11-07 10:54:51.941174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.397 [2024-11-07 10:54:51.941348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.397 [2024-11-07 10:54:51.941530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.397 [2024-11-07 10:54:51.941540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.397 [2024-11-07 10:54:51.941547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.397 [2024-11-07 10:54:51.941555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.397 [2024-11-07 10:54:51.953815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.397 [2024-11-07 10:54:51.954245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.397 [2024-11-07 10:54:51.954262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.397 [2024-11-07 10:54:51.954269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.397 [2024-11-07 10:54:51.954440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.397 [2024-11-07 10:54:51.954605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.397 [2024-11-07 10:54:51.954614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.397 [2024-11-07 10:54:51.954620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.397 [2024-11-07 10:54:51.954627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.397 [2024-11-07 10:54:51.966695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.397 [2024-11-07 10:54:51.967130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.397 [2024-11-07 10:54:51.967178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.397 [2024-11-07 10:54:51.967203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.397 [2024-11-07 10:54:51.967803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.397 [2024-11-07 10:54:51.968333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.397 [2024-11-07 10:54:51.968343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.397 [2024-11-07 10:54:51.968354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.397 [2024-11-07 10:54:51.968362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.397 [2024-11-07 10:54:51.979553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.397 [2024-11-07 10:54:51.979981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.397 [2024-11-07 10:54:51.980026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.397 [2024-11-07 10:54:51.980050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.397 [2024-11-07 10:54:51.980422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.397 [2024-11-07 10:54:51.980595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.397 [2024-11-07 10:54:51.980604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.397 [2024-11-07 10:54:51.980611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.397 [2024-11-07 10:54:51.980618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.397 [2024-11-07 10:54:51.992422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.397 [2024-11-07 10:54:51.992870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.397 [2024-11-07 10:54:51.992916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.397 [2024-11-07 10:54:51.992941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.397 [2024-11-07 10:54:51.993389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.397 [2024-11-07 10:54:51.993565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.397 [2024-11-07 10:54:51.993576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.397 [2024-11-07 10:54:51.993583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.397 [2024-11-07 10:54:51.993590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.397 5654.80 IOPS, 22.09 MiB/s [2024-11-07T09:54:52.068Z] [2024-11-07 10:54:52.005332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.397 [2024-11-07 10:54:52.005687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.398 [2024-11-07 10:54:52.005705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.398 [2024-11-07 10:54:52.005713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.398 [2024-11-07 10:54:52.005877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.398 [2024-11-07 10:54:52.006042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.398 [2024-11-07 10:54:52.006051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.398 [2024-11-07 10:54:52.006057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.398 [2024-11-07 10:54:52.006064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.398 [2024-11-07 10:54:52.018173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.398 [2024-11-07 10:54:52.018604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.398 [2024-11-07 10:54:52.018650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.398 [2024-11-07 10:54:52.018674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.398 [2024-11-07 10:54:52.019256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.398 [2024-11-07 10:54:52.019421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.398 [2024-11-07 10:54:52.019430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.398 [2024-11-07 10:54:52.019443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.398 [2024-11-07 10:54:52.019449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
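Editor's note: the "5654.80 IOPS, 22.09 MiB/s [2024-11-07T09:54:52.068Z]" fragment at the start of this block is a periodic throughput sample from the test application, interleaved with the reset errors. The two figures are mutually consistent if the workload uses a 4 KiB I/O size, which is an assumption here since the block size is not printed in this stretch of the log:

5654.80 IO/s x 4096 B/IO = 23,162,061 B/s
23,162,061 B/s / 1,048,576 B/MiB ≈ 22.09 MiB/s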
00:26:24.398 [2024-11-07 10:54:52.031030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.398 [2024-11-07 10:54:52.031367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.398 [2024-11-07 10:54:52.031384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.398 [2024-11-07 10:54:52.031392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.398 [2024-11-07 10:54:52.031560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.398 [2024-11-07 10:54:52.031726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.398 [2024-11-07 10:54:52.031735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.398 [2024-11-07 10:54:52.031741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.398 [2024-11-07 10:54:52.031748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.398 [2024-11-07 10:54:52.043847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.398 [2024-11-07 10:54:52.044200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.398 [2024-11-07 10:54:52.044217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.398 [2024-11-07 10:54:52.044225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.398 [2024-11-07 10:54:52.044388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.398 [2024-11-07 10:54:52.044558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.398 [2024-11-07 10:54:52.044568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.398 [2024-11-07 10:54:52.044574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.398 [2024-11-07 10:54:52.044581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.398 [2024-11-07 10:54:52.056694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.398 [2024-11-07 10:54:52.057095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.398 [2024-11-07 10:54:52.057117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.398 [2024-11-07 10:54:52.057124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.398 [2024-11-07 10:54:52.057287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.398 [2024-11-07 10:54:52.057457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.398 [2024-11-07 10:54:52.057466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.398 [2024-11-07 10:54:52.057473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.398 [2024-11-07 10:54:52.057480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.657 [2024-11-07 10:54:52.069783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.657 [2024-11-07 10:54:52.070158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.657 [2024-11-07 10:54:52.070175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.657 [2024-11-07 10:54:52.070182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.658 [2024-11-07 10:54:52.070346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.658 [2024-11-07 10:54:52.070515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.658 [2024-11-07 10:54:52.070525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.658 [2024-11-07 10:54:52.070532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.658 [2024-11-07 10:54:52.070539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.658 [2024-11-07 10:54:52.082628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.658 [2024-11-07 10:54:52.082961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-07 10:54:52.082978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.658 [2024-11-07 10:54:52.082986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.658 [2024-11-07 10:54:52.083148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.658 [2024-11-07 10:54:52.083313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.658 [2024-11-07 10:54:52.083322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.658 [2024-11-07 10:54:52.083329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.658 [2024-11-07 10:54:52.083335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.658 [2024-11-07 10:54:52.095529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.658 [2024-11-07 10:54:52.095956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-07 10:54:52.095972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.658 [2024-11-07 10:54:52.095980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.658 [2024-11-07 10:54:52.096147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.658 [2024-11-07 10:54:52.096312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.658 [2024-11-07 10:54:52.096321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.658 [2024-11-07 10:54:52.096327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.658 [2024-11-07 10:54:52.096334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.658 [2024-11-07 10:54:52.108440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.658 [2024-11-07 10:54:52.108862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-07 10:54:52.108920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.658 [2024-11-07 10:54:52.108944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.658 [2024-11-07 10:54:52.109460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.658 [2024-11-07 10:54:52.109627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.658 [2024-11-07 10:54:52.109637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.658 [2024-11-07 10:54:52.109643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.658 [2024-11-07 10:54:52.109650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.658 [2024-11-07 10:54:52.121279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.658 [2024-11-07 10:54:52.121708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-07 10:54:52.121726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.658 [2024-11-07 10:54:52.121734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.658 [2024-11-07 10:54:52.121897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.658 [2024-11-07 10:54:52.122061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.658 [2024-11-07 10:54:52.122070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.658 [2024-11-07 10:54:52.122076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.658 [2024-11-07 10:54:52.122083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.658 [2024-11-07 10:54:52.134073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.658 [2024-11-07 10:54:52.134476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-07 10:54:52.134494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.658 [2024-11-07 10:54:52.134501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.658 [2024-11-07 10:54:52.134665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.658 [2024-11-07 10:54:52.134829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.658 [2024-11-07 10:54:52.134841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.658 [2024-11-07 10:54:52.134848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.658 [2024-11-07 10:54:52.134854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.658 [2024-11-07 10:54:52.146881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.658 [2024-11-07 10:54:52.147282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-07 10:54:52.147298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.658 [2024-11-07 10:54:52.147306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.658 [2024-11-07 10:54:52.147476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.658 [2024-11-07 10:54:52.147641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.658 [2024-11-07 10:54:52.147650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.658 [2024-11-07 10:54:52.147657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.658 [2024-11-07 10:54:52.147663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.658 [2024-11-07 10:54:52.159797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.658 [2024-11-07 10:54:52.160147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-07 10:54:52.160165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.658 [2024-11-07 10:54:52.160173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.658 [2024-11-07 10:54:52.160337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.658 [2024-11-07 10:54:52.160506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.658 [2024-11-07 10:54:52.160516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.658 [2024-11-07 10:54:52.160522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.658 [2024-11-07 10:54:52.160529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.658 [2024-11-07 10:54:52.172625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.658 [2024-11-07 10:54:52.173045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-07 10:54:52.173085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.658 [2024-11-07 10:54:52.173111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.658 [2024-11-07 10:54:52.173674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.658 [2024-11-07 10:54:52.173840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.658 [2024-11-07 10:54:52.173850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.658 [2024-11-07 10:54:52.173857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.658 [2024-11-07 10:54:52.173864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.658 [2024-11-07 10:54:52.185450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.658 [2024-11-07 10:54:52.185882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.658 [2024-11-07 10:54:52.185901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.658 [2024-11-07 10:54:52.185909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.658 [2024-11-07 10:54:52.186072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.658 [2024-11-07 10:54:52.186238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.658 [2024-11-07 10:54:52.186247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.658 [2024-11-07 10:54:52.186254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.658 [2024-11-07 10:54:52.186260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.658 [2024-11-07 10:54:52.198507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.658 [2024-11-07 10:54:52.198858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-07 10:54:52.198886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.659 [2024-11-07 10:54:52.198894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.659 [2024-11-07 10:54:52.199067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.659 [2024-11-07 10:54:52.199240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.659 [2024-11-07 10:54:52.199250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.659 [2024-11-07 10:54:52.199257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.659 [2024-11-07 10:54:52.199264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.659 [2024-11-07 10:54:52.211398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.659 [2024-11-07 10:54:52.211800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-07 10:54:52.211817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.659 [2024-11-07 10:54:52.211825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.659 [2024-11-07 10:54:52.211989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.659 [2024-11-07 10:54:52.212154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.659 [2024-11-07 10:54:52.212163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.659 [2024-11-07 10:54:52.212170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.659 [2024-11-07 10:54:52.212177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.659 [2024-11-07 10:54:52.224292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.659 [2024-11-07 10:54:52.224553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-07 10:54:52.224575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.659 [2024-11-07 10:54:52.224583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.659 [2024-11-07 10:54:52.224756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.659 [2024-11-07 10:54:52.224931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.659 [2024-11-07 10:54:52.224940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.659 [2024-11-07 10:54:52.224947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.659 [2024-11-07 10:54:52.224954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.659 [2024-11-07 10:54:52.237118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.659 [2024-11-07 10:54:52.237498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-07 10:54:52.237544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.659 [2024-11-07 10:54:52.237568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.659 [2024-11-07 10:54:52.237827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.659 [2024-11-07 10:54:52.237992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.659 [2024-11-07 10:54:52.238002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.659 [2024-11-07 10:54:52.238009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.659 [2024-11-07 10:54:52.238016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.659 [2024-11-07 10:54:52.249984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.659 [2024-11-07 10:54:52.250315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-07 10:54:52.250332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.659 [2024-11-07 10:54:52.250340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.659 [2024-11-07 10:54:52.250509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.659 [2024-11-07 10:54:52.250674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.659 [2024-11-07 10:54:52.250684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.659 [2024-11-07 10:54:52.250690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.659 [2024-11-07 10:54:52.250696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.659 [2024-11-07 10:54:52.262907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.659 [2024-11-07 10:54:52.263329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-07 10:54:52.263380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.659 [2024-11-07 10:54:52.263405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.659 [2024-11-07 10:54:52.264006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.659 [2024-11-07 10:54:52.264172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.659 [2024-11-07 10:54:52.264182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.659 [2024-11-07 10:54:52.264189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.659 [2024-11-07 10:54:52.264195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.659 [2024-11-07 10:54:52.275745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.659 [2024-11-07 10:54:52.276151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-07 10:54:52.276168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.659 [2024-11-07 10:54:52.276175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.659 [2024-11-07 10:54:52.276338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.659 [2024-11-07 10:54:52.276508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.659 [2024-11-07 10:54:52.276518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.659 [2024-11-07 10:54:52.276524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.659 [2024-11-07 10:54:52.276530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.659 [2024-11-07 10:54:52.288632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.659 [2024-11-07 10:54:52.289054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-07 10:54:52.289110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.659 [2024-11-07 10:54:52.289135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.659 [2024-11-07 10:54:52.289732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.659 [2024-11-07 10:54:52.290238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.659 [2024-11-07 10:54:52.290247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.659 [2024-11-07 10:54:52.290254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.659 [2024-11-07 10:54:52.290261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.659 [2024-11-07 10:54:52.301543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.659 [2024-11-07 10:54:52.301962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-07 10:54:52.301979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.659 [2024-11-07 10:54:52.301987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.659 [2024-11-07 10:54:52.302151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.659 [2024-11-07 10:54:52.302316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.659 [2024-11-07 10:54:52.302326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.659 [2024-11-07 10:54:52.302336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.659 [2024-11-07 10:54:52.302342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.659 [2024-11-07 10:54:52.314446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.659 [2024-11-07 10:54:52.314861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.659 [2024-11-07 10:54:52.314906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.659 [2024-11-07 10:54:52.314929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.659 [2024-11-07 10:54:52.315404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.659 [2024-11-07 10:54:52.315575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.659 [2024-11-07 10:54:52.315586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.659 [2024-11-07 10:54:52.315592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.659 [2024-11-07 10:54:52.315600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.919 [2024-11-07 10:54:52.327401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.919 [2024-11-07 10:54:52.327835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.919 [2024-11-07 10:54:52.327881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.919 [2024-11-07 10:54:52.327905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.919 [2024-11-07 10:54:52.328424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.919 [2024-11-07 10:54:52.328625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.919 [2024-11-07 10:54:52.328635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.919 [2024-11-07 10:54:52.328643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.919 [2024-11-07 10:54:52.328650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.919 [2024-11-07 10:54:52.340212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.919 [2024-11-07 10:54:52.340564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.919 [2024-11-07 10:54:52.340581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.919 [2024-11-07 10:54:52.340589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.919 [2024-11-07 10:54:52.340752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.919 [2024-11-07 10:54:52.340916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.919 [2024-11-07 10:54:52.340925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.919 [2024-11-07 10:54:52.340931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.919 [2024-11-07 10:54:52.340938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.919 [2024-11-07 10:54:52.353049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.919 [2024-11-07 10:54:52.353411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.919 [2024-11-07 10:54:52.353427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.919 [2024-11-07 10:54:52.353440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.919 [2024-11-07 10:54:52.353605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.919 [2024-11-07 10:54:52.353770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.919 [2024-11-07 10:54:52.353779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.919 [2024-11-07 10:54:52.353785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.919 [2024-11-07 10:54:52.353792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.919 [2024-11-07 10:54:52.365919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.919 [2024-11-07 10:54:52.366276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.919 [2024-11-07 10:54:52.366314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.919 [2024-11-07 10:54:52.366342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.919 [2024-11-07 10:54:52.366938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.919 [2024-11-07 10:54:52.367187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.919 [2024-11-07 10:54:52.367196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.919 [2024-11-07 10:54:52.367202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.919 [2024-11-07 10:54:52.367209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.919 [2024-11-07 10:54:52.378902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.919 [2024-11-07 10:54:52.379255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.919 [2024-11-07 10:54:52.379272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.919 [2024-11-07 10:54:52.379280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.919 [2024-11-07 10:54:52.379470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.919 [2024-11-07 10:54:52.379636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.919 [2024-11-07 10:54:52.379646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.920 [2024-11-07 10:54:52.379652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.920 [2024-11-07 10:54:52.379659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.920 [2024-11-07 10:54:52.391758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.920 [2024-11-07 10:54:52.392171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.920 [2024-11-07 10:54:52.392224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.920 [2024-11-07 10:54:52.392249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.920 [2024-11-07 10:54:52.392847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.920 [2024-11-07 10:54:52.393406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.920 [2024-11-07 10:54:52.393419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.920 [2024-11-07 10:54:52.393429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.920 [2024-11-07 10:54:52.393443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.920 [2024-11-07 10:54:52.405266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.920 [2024-11-07 10:54:52.405674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.920 [2024-11-07 10:54:52.405691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.920 [2024-11-07 10:54:52.405699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.920 [2024-11-07 10:54:52.405866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.920 [2024-11-07 10:54:52.406035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.920 [2024-11-07 10:54:52.406044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.920 [2024-11-07 10:54:52.406051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.920 [2024-11-07 10:54:52.406057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.920 [2024-11-07 10:54:52.418092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.920 [2024-11-07 10:54:52.418531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.920 [2024-11-07 10:54:52.418576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.920 [2024-11-07 10:54:52.418600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.920 [2024-11-07 10:54:52.419180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.920 [2024-11-07 10:54:52.419778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.920 [2024-11-07 10:54:52.419806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.920 [2024-11-07 10:54:52.419830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.920 [2024-11-07 10:54:52.419861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.920 [2024-11-07 10:54:52.430963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.920 [2024-11-07 10:54:52.431381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.920 [2024-11-07 10:54:52.431424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.920 [2024-11-07 10:54:52.431466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.920 [2024-11-07 10:54:52.432056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.920 [2024-11-07 10:54:52.432539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.920 [2024-11-07 10:54:52.432549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.920 [2024-11-07 10:54:52.432556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.920 [2024-11-07 10:54:52.432563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.920 [2024-11-07 10:54:52.443754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.920 [2024-11-07 10:54:52.444196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.920 [2024-11-07 10:54:52.444213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.920 [2024-11-07 10:54:52.444222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.920 [2024-11-07 10:54:52.444385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.920 [2024-11-07 10:54:52.444555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.920 [2024-11-07 10:54:52.444566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.920 [2024-11-07 10:54:52.444573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.920 [2024-11-07 10:54:52.444581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.920 [2024-11-07 10:54:52.456933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.920 [2024-11-07 10:54:52.457306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.920 [2024-11-07 10:54:52.457325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.920 [2024-11-07 10:54:52.457334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.920 [2024-11-07 10:54:52.457531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.920 [2024-11-07 10:54:52.457712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.920 [2024-11-07 10:54:52.457722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.920 [2024-11-07 10:54:52.457729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.920 [2024-11-07 10:54:52.457736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.920 [2024-11-07 10:54:52.469723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.920 [2024-11-07 10:54:52.470119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.920 [2024-11-07 10:54:52.470136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.920 [2024-11-07 10:54:52.470144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.920 [2024-11-07 10:54:52.470307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.920 [2024-11-07 10:54:52.470477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.920 [2024-11-07 10:54:52.470486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.920 [2024-11-07 10:54:52.470497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.920 [2024-11-07 10:54:52.470503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.920 [2024-11-07 10:54:52.482596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.920 [2024-11-07 10:54:52.482944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.920 [2024-11-07 10:54:52.482961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.920 [2024-11-07 10:54:52.482968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.920 [2024-11-07 10:54:52.483132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.920 [2024-11-07 10:54:52.483297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.920 [2024-11-07 10:54:52.483306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.920 [2024-11-07 10:54:52.483312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.920 [2024-11-07 10:54:52.483319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.920 [2024-11-07 10:54:52.495425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.920 [2024-11-07 10:54:52.495826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.920 [2024-11-07 10:54:52.495843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.920 [2024-11-07 10:54:52.495851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.920 [2024-11-07 10:54:52.496014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.920 [2024-11-07 10:54:52.496178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.920 [2024-11-07 10:54:52.496187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.920 [2024-11-07 10:54:52.496193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.920 [2024-11-07 10:54:52.496200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.920 [2024-11-07 10:54:52.508228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.920 [2024-11-07 10:54:52.508639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.920 [2024-11-07 10:54:52.508656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.920 [2024-11-07 10:54:52.508664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.920 [2024-11-07 10:54:52.508827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.920 [2024-11-07 10:54:52.508992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.921 [2024-11-07 10:54:52.509001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.921 [2024-11-07 10:54:52.509008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.921 [2024-11-07 10:54:52.509015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.921 [2024-11-07 10:54:52.521123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.921 [2024-11-07 10:54:52.521478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.921 [2024-11-07 10:54:52.521525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.921 [2024-11-07 10:54:52.521548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.921 [2024-11-07 10:54:52.522131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.921 [2024-11-07 10:54:52.522571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.921 [2024-11-07 10:54:52.522580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.921 [2024-11-07 10:54:52.522587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.921 [2024-11-07 10:54:52.522594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.921 [2024-11-07 10:54:52.534023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.921 [2024-11-07 10:54:52.534444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.921 [2024-11-07 10:54:52.534461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.921 [2024-11-07 10:54:52.534469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.921 [2024-11-07 10:54:52.534631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.921 [2024-11-07 10:54:52.534794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.921 [2024-11-07 10:54:52.534804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.921 [2024-11-07 10:54:52.534811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.921 [2024-11-07 10:54:52.534817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.921 [2024-11-07 10:54:52.546922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.921 [2024-11-07 10:54:52.547327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.921 [2024-11-07 10:54:52.547371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.921 [2024-11-07 10:54:52.547396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.921 [2024-11-07 10:54:52.547869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.921 [2024-11-07 10:54:52.548035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.921 [2024-11-07 10:54:52.548045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.921 [2024-11-07 10:54:52.548051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.921 [2024-11-07 10:54:52.548058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:24.921 [2024-11-07 10:54:52.559722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.921 [2024-11-07 10:54:52.560070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.921 [2024-11-07 10:54:52.560091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.921 [2024-11-07 10:54:52.560099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.921 [2024-11-07 10:54:52.560262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.921 [2024-11-07 10:54:52.560426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.921 [2024-11-07 10:54:52.560440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.921 [2024-11-07 10:54:52.560447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.921 [2024-11-07 10:54:52.560455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:24.921 [2024-11-07 10:54:52.572589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:24.921 [2024-11-07 10:54:52.572866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.921 [2024-11-07 10:54:52.572884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:24.921 [2024-11-07 10:54:52.572891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:24.921 [2024-11-07 10:54:52.573055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:24.921 [2024-11-07 10:54:52.573219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:24.921 [2024-11-07 10:54:52.573228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:24.921 [2024-11-07 10:54:52.573234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:24.921 [2024-11-07 10:54:52.573241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.180 [2024-11-07 10:54:52.585781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.180 [2024-11-07 10:54:52.586131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.180 [2024-11-07 10:54:52.586149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.180 [2024-11-07 10:54:52.586157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.180 [2024-11-07 10:54:52.586335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.180 [2024-11-07 10:54:52.586520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.180 [2024-11-07 10:54:52.586531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.180 [2024-11-07 10:54:52.586539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.180 [2024-11-07 10:54:52.586546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:25.180 [2024-11-07 10:54:52.598763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.180 [2024-11-07 10:54:52.599144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.180 [2024-11-07 10:54:52.599160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.180 [2024-11-07 10:54:52.599168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.180 [2024-11-07 10:54:52.599335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.180 [2024-11-07 10:54:52.599505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.180 [2024-11-07 10:54:52.599515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.180 [2024-11-07 10:54:52.599522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.180 [2024-11-07 10:54:52.599528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2835500 Killed "${NVMF_APP[@]}" "$@" 00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.180 [2024-11-07 10:54:52.611948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.180 [2024-11-07 10:54:52.612317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.180 [2024-11-07 10:54:52.612336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.180 [2024-11-07 10:54:52.612344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.180 [2024-11-07 10:54:52.612527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.180 [2024-11-07 10:54:52.612707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.180 [2024-11-07 10:54:52.612717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.180 [2024-11-07 10:54:52.612725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.180 [2024-11-07 10:54:52.612731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
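Mixed into the reset loop here is output from the test script itself: bash reports that PID 2835500, the nvmf target that bdevperf had been connected to, was terminated by SIGKILL ("Killed"), which is why every reconnect attempt above is refused, and bdevperf.sh then calls tgt_init and nvmfappstart -m 0xE to bring up a fresh target. The "Killed" status line corresponds to a child process that exited due to SIGKILL, as the standalone sketch below shows; it is a generic illustration of the process mechanics, not part of the test harness.

#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: stand-in for a long-running target process. */
        pause();
        _exit(0);
    }

    /* Parent: terminate the child the same way the harness kills the old
     * nvmf target, then collect its exit status. */
    sleep(1);
    kill(pid, SIGKILL);

    int status = 0;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status) && WTERMSIG(status) == SIGKILL) {
        /* This is the condition that bash reports as "<pid> Killed". */
        printf("child %d was terminated by SIGKILL\n", (int)pid);
    }
    return 0;
}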
00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2836716 00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2836716 00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2836716 ']' 00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:25.180 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.180 [2024-11-07 10:54:52.625110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.180 [2024-11-07 10:54:52.625491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.180 [2024-11-07 10:54:52.625510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.180 [2024-11-07 10:54:52.625523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.180 [2024-11-07 10:54:52.625702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.180 [2024-11-07 10:54:52.625880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.180 [2024-11-07 10:54:52.625890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.180 [2024-11-07 10:54:52.625897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.180 [2024-11-07 10:54:52.625904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
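The new target (PID 2836716) is launched inside the cvl_0_0_ns_spdk network namespace, and waitforlisten blocks until that process is up and accepting RPCs on the default socket /var/tmp/spdk.sock, as the echoed "Waiting for process to start up and listen..." message says. Conceptually that is a poll loop that retries a connect() on the UNIX-domain socket until it succeeds or a timeout is hit. The sketch below illustrates that idea in standalone C; it is not the actual helper from the autotest scripts, and the attempt count and interval are arbitrary.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Return 0 once a connect() to the UNIX-domain socket at `path` succeeds,
 * or -1 after `max_attempts` tries spaced `interval_us` microseconds apart. */
static int wait_for_listen(const char *path, int max_attempts, useconds_t interval_us)
{
    for (int i = 0; i < max_attempts; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;          /* someone is listening on the socket */
        }
        close(fd);
        usleep(interval_us);   /* socket missing or refusing: retry later */
    }
    return -1;
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100, 100 * 1000) == 0) {
        printf("RPC socket is accepting connections\n");
    } else {
        printf("timed out waiting for listener\n");
    }
    return 0;
}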
00:26:25.180 [2024-11-07 10:54:52.638279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.180 [2024-11-07 10:54:52.638626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.180 [2024-11-07 10:54:52.638645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.180 [2024-11-07 10:54:52.638653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.180 [2024-11-07 10:54:52.638832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.180 [2024-11-07 10:54:52.639011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.180 [2024-11-07 10:54:52.639021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.181 [2024-11-07 10:54:52.639029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.181 [2024-11-07 10:54:52.639036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.181 [2024-11-07 10:54:52.651379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.181 [2024-11-07 10:54:52.651652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.181 [2024-11-07 10:54:52.651670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.181 [2024-11-07 10:54:52.651679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.181 [2024-11-07 10:54:52.651851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.181 [2024-11-07 10:54:52.652025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.181 [2024-11-07 10:54:52.652034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.181 [2024-11-07 10:54:52.652041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.181 [2024-11-07 10:54:52.652048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.181 [2024-11-07 10:54:52.661019] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:26:25.181 [2024-11-07 10:54:52.661060] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.181 [2024-11-07 10:54:52.664390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.181 [2024-11-07 10:54:52.664779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.181 [2024-11-07 10:54:52.664797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.181 [2024-11-07 10:54:52.664810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.181 [2024-11-07 10:54:52.664983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.181 [2024-11-07 10:54:52.665158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.181 [2024-11-07 10:54:52.665167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.181 [2024-11-07 10:54:52.665175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.181 [2024-11-07 10:54:52.665181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.181 [2024-11-07 10:54:52.677389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.181 [2024-11-07 10:54:52.677784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.181 [2024-11-07 10:54:52.677802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.181 [2024-11-07 10:54:52.677811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.181 [2024-11-07 10:54:52.677985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.181 [2024-11-07 10:54:52.678159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.181 [2024-11-07 10:54:52.678169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.181 [2024-11-07 10:54:52.678176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.181 [2024-11-07 10:54:52.678183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:25.181 [2024-11-07 10:54:52.690484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.181 [2024-11-07 10:54:52.690831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.181 [2024-11-07 10:54:52.690849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.181 [2024-11-07 10:54:52.690857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.181 [2024-11-07 10:54:52.691031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.181 [2024-11-07 10:54:52.691205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.181 [2024-11-07 10:54:52.691214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.181 [2024-11-07 10:54:52.691222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.181 [2024-11-07 10:54:52.691229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.181 [2024-11-07 10:54:52.703642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.181 [2024-11-07 10:54:52.703944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.181 [2024-11-07 10:54:52.703962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.181 [2024-11-07 10:54:52.703971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.181 [2024-11-07 10:54:52.704150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.181 [2024-11-07 10:54:52.704332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.181 [2024-11-07 10:54:52.704344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.181 [2024-11-07 10:54:52.704351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.181 [2024-11-07 10:54:52.704358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:25.181 [2024-11-07 10:54:52.716757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.181 [2024-11-07 10:54:52.717101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.181 [2024-11-07 10:54:52.717119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.181 [2024-11-07 10:54:52.717127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.181 [2024-11-07 10:54:52.717306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.181 [2024-11-07 10:54:52.717491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.181 [2024-11-07 10:54:52.717501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.181 [2024-11-07 10:54:52.717510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.181 [2024-11-07 10:54:52.717518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.181 [2024-11-07 10:54:52.728792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:25.181 [2024-11-07 10:54:52.729786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.181 [2024-11-07 10:54:52.730150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.181 [2024-11-07 10:54:52.730168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.181 [2024-11-07 10:54:52.730177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.181 [2024-11-07 10:54:52.730351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.181 [2024-11-07 10:54:52.730531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.181 [2024-11-07 10:54:52.730541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.181 [2024-11-07 10:54:52.730549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.181 [2024-11-07 10:54:52.730556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:25.181 [2024-11-07 10:54:52.742903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.181 [2024-11-07 10:54:52.743284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.181 [2024-11-07 10:54:52.743305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.181 [2024-11-07 10:54:52.743314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.181 [2024-11-07 10:54:52.743493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.181 [2024-11-07 10:54:52.743668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.181 [2024-11-07 10:54:52.743678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.181 [2024-11-07 10:54:52.743690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.181 [2024-11-07 10:54:52.743697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.181 [2024-11-07 10:54:52.755919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.181 [2024-11-07 10:54:52.756327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.181 [2024-11-07 10:54:52.756346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.181 [2024-11-07 10:54:52.756354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.181 [2024-11-07 10:54:52.756533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.181 [2024-11-07 10:54:52.756707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.181 [2024-11-07 10:54:52.756716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.181 [2024-11-07 10:54:52.756723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.181 [2024-11-07 10:54:52.756730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:25.181 [2024-11-07 10:54:52.768891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.181 [2024-11-07 10:54:52.769304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.181 [2024-11-07 10:54:52.769322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.181 [2024-11-07 10:54:52.769330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.181 [2024-11-07 10:54:52.769509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.181 [2024-11-07 10:54:52.769685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.181 [2024-11-07 10:54:52.769695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.181 [2024-11-07 10:54:52.769702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.181 [2024-11-07 10:54:52.769710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.181 [2024-11-07 10:54:52.771003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:25.181 [2024-11-07 10:54:52.771030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.181 [2024-11-07 10:54:52.771038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.181 [2024-11-07 10:54:52.771045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.181 [2024-11-07 10:54:52.771050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:25.181 [2024-11-07 10:54:52.772461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.181 [2024-11-07 10:54:52.772552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.181 [2024-11-07 10:54:52.772554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.181 [2024-11-07 10:54:52.782072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.181 [2024-11-07 10:54:52.782516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.181 [2024-11-07 10:54:52.782538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.181 [2024-11-07 10:54:52.782551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.181 [2024-11-07 10:54:52.782733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.181 [2024-11-07 10:54:52.782913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.181 [2024-11-07 10:54:52.782923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.181 [2024-11-07 10:54:52.782932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.181 [2024-11-07 10:54:52.782940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.181 [2024-11-07 10:54:52.795169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.181 [2024-11-07 10:54:52.795538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.181 [2024-11-07 10:54:52.795558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.181 [2024-11-07 10:54:52.795568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.181 [2024-11-07 10:54:52.795749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.181 [2024-11-07 10:54:52.795929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.181 [2024-11-07 10:54:52.795939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.181 [2024-11-07 10:54:52.795947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.181 [2024-11-07 10:54:52.795955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
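The three reactor threads on cores 1, 2 and 3 follow directly from the core mask the target was started with: -m 0xE (seen by DPDK as -c 0xE) is binary 1110, i.e. bits 1, 2 and 3 set, which also matches the earlier "Total cores available: 3" notice. The short standalone C sketch below simply decodes such a mask into core IDs; it is illustrative only.

#include <stdio.h>

int main(void)
{
    /* Core mask as passed to the target via -m / -c; 0xE is binary 1110. */
    unsigned long mask = 0xE;

    printf("core mask 0x%lX selects cores:", mask);
    for (unsigned int core = 0; core < 8 * sizeof(mask); core++) {
        if (mask & (1UL << core)) {
            printf(" %u", core);   /* prints 1 2 3 for 0xE */
        }
    }
    printf("\n");
    return 0;
}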
00:26:25.181 [2024-11-07 10:54:52.808336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.181 [2024-11-07 10:54:52.808705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.182 [2024-11-07 10:54:52.808728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.182 [2024-11-07 10:54:52.808738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.182 [2024-11-07 10:54:52.808918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.182 [2024-11-07 10:54:52.809100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.182 [2024-11-07 10:54:52.809110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.182 [2024-11-07 10:54:52.809118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.182 [2024-11-07 10:54:52.809127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.182 [2024-11-07 10:54:52.821513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.182 [2024-11-07 10:54:52.821832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.182 [2024-11-07 10:54:52.821855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.182 [2024-11-07 10:54:52.821865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.182 [2024-11-07 10:54:52.822045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.182 [2024-11-07 10:54:52.822233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.182 [2024-11-07 10:54:52.822244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.182 [2024-11-07 10:54:52.822252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.182 [2024-11-07 10:54:52.822260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:25.182 [2024-11-07 10:54:52.834642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.182 [2024-11-07 10:54:52.835048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.182 [2024-11-07 10:54:52.835069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.182 [2024-11-07 10:54:52.835078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.182 [2024-11-07 10:54:52.835257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.182 [2024-11-07 10:54:52.835442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.182 [2024-11-07 10:54:52.835453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.182 [2024-11-07 10:54:52.835462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.182 [2024-11-07 10:54:52.835470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.440 [2024-11-07 10:54:52.847830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.440 [2024-11-07 10:54:52.848209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.440 [2024-11-07 10:54:52.848227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.440 [2024-11-07 10:54:52.848236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.440 [2024-11-07 10:54:52.848414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.440 [2024-11-07 10:54:52.848599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.440 [2024-11-07 10:54:52.848610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.440 [2024-11-07 10:54:52.848617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.440 [2024-11-07 10:54:52.848625] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:25.440 [2024-11-07 10:54:52.861007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.440 [2024-11-07 10:54:52.861306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.440 [2024-11-07 10:54:52.861323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.440 [2024-11-07 10:54:52.861332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.440 [2024-11-07 10:54:52.861515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.440 [2024-11-07 10:54:52.861695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.440 [2024-11-07 10:54:52.861705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.440 [2024-11-07 10:54:52.861718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.440 [2024-11-07 10:54:52.861726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.440 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:25.440 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:26:25.440 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:25.440 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:25.440 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.440 [2024-11-07 10:54:52.874110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.440 [2024-11-07 10:54:52.874468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.440 [2024-11-07 10:54:52.874487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.440 [2024-11-07 10:54:52.874496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.440 [2024-11-07 10:54:52.874675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.440 [2024-11-07 10:54:52.874855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.440 [2024-11-07 10:54:52.874866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.440 [2024-11-07 10:54:52.874873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.440 [2024-11-07 10:54:52.874880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
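Aside (illustration, not part of the captured output): the pattern that repeats above and below is the host-side bdev_nvme reconnect path. Each pass disconnects the controller, tries to reopen the TCP queue pair to 10.0.0.2:4420, and fails with errno 111. On Linux errno 111 is ECONNREFUSED, meaning nothing is accepting connections on that address and port yet; the resets keep failing until the target's listener is created further down, at which point "Resetting controller successful" appears. A hypothetical way to watch for that listener from the target namespace (commands are illustrative, not taken from this run):

    # lists the NVMe/TCP listener once nvmf_subsystem_add_listener has run;
    # until then, connects from the host fail with ECONNREFUSED (errno 111)
    ip netns exec cvl_0_0_ns_spdk ss -ltn | grep ':4420' || echo "no listener on port 4420 yet"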
00:26:25.440 [2024-11-07 10:54:52.887247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.440 [2024-11-07 10:54:52.887551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.440 [2024-11-07 10:54:52.887570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.440 [2024-11-07 10:54:52.887578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.440 [2024-11-07 10:54:52.887756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.440 [2024-11-07 10:54:52.887936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.440 [2024-11-07 10:54:52.887947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.440 [2024-11-07 10:54:52.887953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.440 [2024-11-07 10:54:52.887961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.440 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.440 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:25.440 [2024-11-07 10:54:52.900353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.440 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.440 [2024-11-07 10:54:52.900708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.440 [2024-11-07 10:54:52.900728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.440 [2024-11-07 10:54:52.900736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.440 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.440 [2024-11-07 10:54:52.900917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.440 [2024-11-07 10:54:52.901098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.440 [2024-11-07 10:54:52.901108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.440 [2024-11-07 10:54:52.901115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.441 [2024-11-07 10:54:52.901122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:25.441 [2024-11-07 10:54:52.903989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.441 [2024-11-07 10:54:52.913498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.441 [2024-11-07 10:54:52.913789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.441 [2024-11-07 10:54:52.913808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.441 [2024-11-07 10:54:52.913816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.441 [2024-11-07 10:54:52.913995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.441 [2024-11-07 10:54:52.914174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.441 [2024-11-07 10:54:52.914184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.441 [2024-11-07 10:54:52.914191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.441 [2024-11-07 10:54:52.914198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.441 [2024-11-07 10:54:52.926587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.441 [2024-11-07 10:54:52.926903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.441 [2024-11-07 10:54:52.926922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.441 [2024-11-07 10:54:52.926930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.441 [2024-11-07 10:54:52.927109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.441 [2024-11-07 10:54:52.927289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.441 [2024-11-07 10:54:52.927299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.441 [2024-11-07 10:54:52.927306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.441 [2024-11-07 10:54:52.927313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:25.441 [2024-11-07 10:54:52.939699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.441 [2024-11-07 10:54:52.940072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.441 [2024-11-07 10:54:52.940096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.441 [2024-11-07 10:54:52.940105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.441 [2024-11-07 10:54:52.940283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.441 [2024-11-07 10:54:52.940467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.441 [2024-11-07 10:54:52.940478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.441 [2024-11-07 10:54:52.940485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.441 [2024-11-07 10:54:52.940492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.441 Malloc0 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.441 [2024-11-07 10:54:52.952758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.441 [2024-11-07 10:54:52.953055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.441 [2024-11-07 10:54:52.953074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.441 [2024-11-07 10:54:52.953082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.441 [2024-11-07 10:54:52.953261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.441 [2024-11-07 10:54:52.953446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.441 [2024-11-07 10:54:52.953456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.441 [2024-11-07 10:54:52.953463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.441 [2024-11-07 10:54:52.953470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:25.441 [2024-11-07 10:54:52.966030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.441 [2024-11-07 10:54:52.966424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.441 [2024-11-07 10:54:52.966447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca4510 with addr=10.0.0.2, port=4420 00:26:25.441 [2024-11-07 10:54:52.966456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca4510 is same with the state(6) to be set 00:26:25.441 [2024-11-07 10:54:52.966640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca4510 (9): Bad file descriptor 00:26:25.441 [2024-11-07 10:54:52.966820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:25.441 [2024-11-07 10:54:52.966812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.441 [2024-11-07 10:54:52.966831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:25.441 [2024-11-07 10:54:52.966840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:25.441 [2024-11-07 10:54:52.966847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.441 10:54:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2835760 00:26:25.441 [2024-11-07 10:54:52.979222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:25.441 4712.33 IOPS, 18.41 MiB/s [2024-11-07T09:54:53.112Z] [2024-11-07 10:54:53.084084] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
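Aside (illustration, not part of the captured output): the rpc_cmd calls interleaved above provision the target side that the host keeps reconnecting to: a TCP transport, a RAM-backed Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.2:4420. Outside the test harness the same sequence could be issued with SPDK's scripts/rpc.py against a running nvmf_tgt; this is a sketch only — the flags are copied from the rpc_cmd lines above, while the rpc.py path and default RPC socket are assumptions:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB IO unit size
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is added, the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above is printed, the pending reset finally succeeds, and the IOPS ramp in the following lines begins.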
00:26:27.748 5462.86 IOPS, 21.34 MiB/s [2024-11-07T09:54:56.403Z] 6183.00 IOPS, 24.15 MiB/s [2024-11-07T09:54:57.378Z] 6700.89 IOPS, 26.18 MiB/s [2024-11-07T09:54:58.311Z] 7123.80 IOPS, 27.83 MiB/s [2024-11-07T09:54:59.245Z] 7481.73 IOPS, 29.23 MiB/s [2024-11-07T09:55:00.192Z] 7791.17 IOPS, 30.43 MiB/s [2024-11-07T09:55:01.126Z] 8034.23 IOPS, 31.38 MiB/s [2024-11-07T09:55:02.061Z] 8235.07 IOPS, 32.17 MiB/s [2024-11-07T09:55:02.061Z] 8409.60 IOPS, 32.85 MiB/s 00:26:34.390 Latency(us) 00:26:34.390 [2024-11-07T09:55:02.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.390 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:34.390 Verification LBA range: start 0x0 length 0x4000 00:26:34.390 Nvme1n1 : 15.01 8413.48 32.87 11097.51 0.00 6540.51 651.80 14360.93 00:26:34.390 [2024-11-07T09:55:02.061Z] =================================================================================================================== 00:26:34.390 [2024-11-07T09:55:02.061Z] Total : 8413.48 32.87 11097.51 0.00 6540.51 651.80 14360.93 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:34.649 rmmod nvme_tcp 00:26:34.649 rmmod nvme_fabrics 00:26:34.649 rmmod nvme_keyring 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2836716 ']' 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2836716 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 2836716 ']' 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 2836716 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:26:34.649 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:34.650 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2836716 
00:26:34.909 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:34.909 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:34.909 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2836716' 00:26:34.909 killing process with pid 2836716 00:26:34.909 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 2836716 00:26:34.909 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 2836716 00:26:34.909 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:34.909 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:34.909 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:34.909 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:34.909 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:26:34.909 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:34.910 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:26:34.910 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:34.910 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:34.910 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.910 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.910 10:55:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:37.447 00:26:37.447 real 0m25.490s 00:26:37.447 user 1m0.435s 00:26:37.447 sys 0m6.336s 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:37.447 ************************************ 00:26:37.447 END TEST nvmf_bdevperf 00:26:37.447 ************************************ 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.447 ************************************ 00:26:37.447 START TEST nvmf_target_disconnect 00:26:37.447 ************************************ 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:37.447 * Looking for test storage... 
00:26:37.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:37.447 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:37.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.448 --rc genhtml_branch_coverage=1 00:26:37.448 --rc genhtml_function_coverage=1 00:26:37.448 --rc genhtml_legend=1 00:26:37.448 --rc geninfo_all_blocks=1 00:26:37.448 --rc geninfo_unexecuted_blocks=1 00:26:37.448 00:26:37.448 ' 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:37.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.448 --rc genhtml_branch_coverage=1 00:26:37.448 --rc genhtml_function_coverage=1 00:26:37.448 --rc genhtml_legend=1 00:26:37.448 --rc geninfo_all_blocks=1 00:26:37.448 --rc geninfo_unexecuted_blocks=1 00:26:37.448 00:26:37.448 ' 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:37.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.448 --rc genhtml_branch_coverage=1 00:26:37.448 --rc genhtml_function_coverage=1 00:26:37.448 --rc genhtml_legend=1 00:26:37.448 --rc geninfo_all_blocks=1 00:26:37.448 --rc geninfo_unexecuted_blocks=1 00:26:37.448 00:26:37.448 ' 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:37.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.448 --rc genhtml_branch_coverage=1 00:26:37.448 --rc genhtml_function_coverage=1 00:26:37.448 --rc genhtml_legend=1 00:26:37.448 --rc geninfo_all_blocks=1 00:26:37.448 --rc geninfo_unexecuted_blocks=1 00:26:37.448 00:26:37.448 ' 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.448 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:37.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:37.449 10:55:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.721 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:42.721 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:42.722 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:42.722 Found net devices under 0000:86:00.0: cvl_0_0 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:42.722 Found net devices under 0000:86:00.1: cvl_0_1 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
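Aside (illustration, not part of the captured output): nvmf_tcp_init, invoked just above, wires the two E810 ports into a point-to-point test network, as the following lines show: the target port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, and an iptables rule plus a ping verify the path. Condensed into plain commands (interface and namespace names are taken from this log):

    ip netns add cvl_0_0_ns_spdk                                    # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                              # reachability check, as below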
00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:42.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:42.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:26:42.722 00:26:42.722 --- 10.0.0.2 ping statistics --- 00:26:42.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.722 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:42.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:42.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:26:42.722 00:26:42.722 --- 10.0.0.1 ping statistics --- 00:26:42.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.722 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:42.722 10:55:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:42.722 ************************************ 00:26:42.722 START TEST nvmf_target_disconnect_tc1 00:26:42.722 ************************************ 00:26:42.722 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:26:42.722 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:42.722 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:26:42.722 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:42.722 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:42.722 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:42.723 10:55:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:42.723 [2024-11-07 10:55:10.115443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.723 [2024-11-07 10:55:10.115498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1913ac0 with addr=10.0.0.2, port=4420 00:26:42.723 [2024-11-07 10:55:10.115524] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:42.723 [2024-11-07 10:55:10.115534] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:42.723 [2024-11-07 10:55:10.115541] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:42.723 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:42.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:42.723 Initializing NVMe Controllers 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:42.723 00:26:42.723 real 0m0.109s 00:26:42.723 user 0m0.054s 00:26:42.723 sys 0m0.054s 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:42.723 ************************************ 00:26:42.723 END TEST nvmf_target_disconnect_tc1 00:26:42.723 ************************************ 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:42.723 ************************************ 00:26:42.723 START TEST nvmf_target_disconnect_tc2 00:26:42.723 ************************************ 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2841845 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2841845 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2841845 ']' 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:42.723 10:55:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.723 [2024-11-07 10:55:10.248551] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:26:42.723 [2024-11-07 10:55:10.248596] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.723 [2024-11-07 10:55:10.330016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:42.723 [2024-11-07 10:55:10.372525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.723 [2024-11-07 10:55:10.372564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
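The nvmf_tgt launch, DPDK EAL parameters, and tracepoint notices above come from the nvmfappstart and waitforlisten helpers seen in the trace. A minimal manual equivalent, assuming the SPDK repository root as the working directory and the default /var/tmp/spdk.sock RPC socket (the polling loop is an illustration, not the actual autotest helper):

    # Start the target inside the test network namespace, with the same flags
    # as above: -i shared-memory id, -e tracepoint group mask, -m core mask.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!

    # Poll the RPC socket until the app answers; rpc_get_methods only succeeds
    # once initialization is complete, which is what waitforlisten waits for.
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done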
00:26:42.723 [2024-11-07 10:55:10.372574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.723 [2024-11-07 10:55:10.372580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.723 [2024-11-07 10:55:10.372585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:42.723 [2024-11-07 10:55:10.374263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:42.723 [2024-11-07 10:55:10.374385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:42.723 [2024-11-07 10:55:10.374493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:42.723 [2024-11-07 10:55:10.374493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.658 Malloc0 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.658 [2024-11-07 10:55:11.160489] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.658 10:55:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.658 [2024-11-07 10:55:11.188717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2841892 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:43.658 10:55:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:45.561 10:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2841845 00:26:45.561 10:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:45.561 Read completed with error (sct=0, sc=8) 00:26:45.561 starting I/O failed 00:26:45.561 Read completed with error (sct=0, sc=8) 00:26:45.561 starting I/O failed 00:26:45.561 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error 
(sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 [2024-11-07 10:55:13.214090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read 
completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 [2024-11-07 10:55:13.214299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 
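The bursts of aborted reads and writes above and below are the reconnect example's outstanding I/Os being completed with errors once its target goes away. For reference, the rpc_cmd configuration interleaved in the trace earlier corresponds roughly to the following direct scripts/rpc.py calls (rpc_cmd is effectively a wrapper around rpc.py; default socket path assumed):

    # Backing malloc bdev, TCP transport (with the same -o option used above),
    # subsystem, namespace, and data + discovery listeners on 10.0.0.2:4420.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420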
00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Read completed with error (sct=0, sc=8) 00:26:45.562 starting I/O failed 00:26:45.562 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 [2024-11-07 10:55:13.214499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 
starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Read completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 Write completed with error (sct=0, sc=8) 00:26:45.563 starting I/O failed 00:26:45.563 [2024-11-07 10:55:13.214700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:45.563 [2024-11-07 10:55:13.214903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.214925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.215028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.215038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.215220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.215250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.215456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.215490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.215688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.215719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.215833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.215864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.216037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.216088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.216242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.216302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 
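Decoding the failures above: sct=0 is the NVMe generic command status code type, and status code 0x08 in that set is defined by the NVMe specification as "Command Aborted due to SQ Deletion", i.e. outstanding I/Os aborted as the qpair is torn down; the "CQ transport error -6 (No such device or address)" is a negated errno, ENXIO. Both can be checked locally; the header paths below are assumptions:

    # Generic NVMe status code for aborts caused by submission queue deletion.
    grep -n "SQ_DELETION" include/spdk/nvme_spec.h
    # ENXIO is errno 6 on Linux ("No such device or address").
    grep -n "ENXIO" /usr/include/asm-generic/errno-base.h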
00:26:45.563 [2024-11-07 10:55:13.216472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.216485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.216579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.216590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.216669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.216680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.216775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.216787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.216994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.217007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.217119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.217151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.217271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.217304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.217424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.217472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.217590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.217621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.217830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.217863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 
00:26:45.563 [2024-11-07 10:55:13.218040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.218072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.218324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.563 [2024-11-07 10:55:13.218365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.563 qpair failed and we were unable to recover it. 00:26:45.563 [2024-11-07 10:55:13.218513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.218526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.218609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.218619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.218739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.218770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.218956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.218988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.219128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.219161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.219282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.219295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.219442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.219476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.219677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.219710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
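Every connect() failure in this stretch reports errno = 111, which is ECONNREFUSED on Linux: nothing is accepting connections on 10.0.0.2:4420 while the target process killed above is down, so the reconnect example keeps retrying and logging "qpair failed and we were unable to recover it." A quick sanity check (header path assumed):

    # ECONNREFUSED is defined as 111 ("Connection refused") on Linux.
    grep -n "ECONNREFUSED" /usr/include/asm-generic/errno.h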
00:26:45.564 [2024-11-07 10:55:13.219903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.219935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.220070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.220082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.220238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.220250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.220475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.220488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.220689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.220701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.220838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.220851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.220936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.220946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.221086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.221098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.221231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.221243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.221337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.221348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
00:26:45.564 [2024-11-07 10:55:13.221521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.221533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.221608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.221619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.221705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.221716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.221866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.221878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.222024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.222035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.222115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.222126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.222188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.222198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.222292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.222303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.222487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.222507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.222596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.222611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 
00:26:45.564 [2024-11-07 10:55:13.222763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.222779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.222884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.222901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.223045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.223061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.223214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.223230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.223306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.223321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.223458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.564 [2024-11-07 10:55:13.223475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.564 qpair failed and we were unable to recover it. 00:26:45.564 [2024-11-07 10:55:13.223665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.223681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.223784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.223799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.223938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.223954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.224121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.224137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-11-07 10:55:13.224227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.224242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.224392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.224446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.224583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.224617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.224795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.224828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.225048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.225081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.225328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.225362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.225545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.225580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.225767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.225801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.225990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.226024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.226291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.226324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 
00:26:45.565 [2024-11-07 10:55:13.226588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.226605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.226704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.226720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.226945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.226961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.227050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.227064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.227230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.227246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.565 [2024-11-07 10:55:13.227399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.565 [2024-11-07 10:55:13.227419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.565 qpair failed and we were unable to recover it. 00:26:45.848 [2024-11-07 10:55:13.227596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.227613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 00:26:45.848 [2024-11-07 10:55:13.227791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.227807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 00:26:45.848 [2024-11-07 10:55:13.227963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.227979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 00:26:45.848 [2024-11-07 10:55:13.228119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.228135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 
00:26:45.848 [2024-11-07 10:55:13.228235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.228251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 00:26:45.848 [2024-11-07 10:55:13.228353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.228389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 00:26:45.848 [2024-11-07 10:55:13.228594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.228630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 00:26:45.848 [2024-11-07 10:55:13.228759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.228792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 00:26:45.848 [2024-11-07 10:55:13.228923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.228957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 00:26:45.848 [2024-11-07 10:55:13.229230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.229264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 00:26:45.848 [2024-11-07 10:55:13.229399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.229443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 00:26:45.848 [2024-11-07 10:55:13.229642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.229666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 00:26:45.848 [2024-11-07 10:55:13.229875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.229909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 00:26:45.848 [2024-11-07 10:55:13.230051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.848 [2024-11-07 10:55:13.230084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.848 qpair failed and we were unable to recover it. 
00:26:45.848 [2024-11-07 10:55:13.230335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.848 [2024-11-07 10:55:13.230368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:45.848 qpair failed and we were unable to recover it.
[... the same three-line failure sequence -- connect() failed, errno = 111 / sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- recurs continuously from 10:55:13.230 through 10:55:13.273 for tqpair values 0x7f8970000b90, 0x1f1dbe0, 0x7f896c000b90, and 0x7f8978000b90 ...]
00:26:45.855 [2024-11-07 10:55:13.273031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.855 [2024-11-07 10:55:13.273047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420
00:26:45.855 qpair failed and we were unable to recover it.
00:26:45.855 [2024-11-07 10:55:13.273263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.273279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.273372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.273386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.273556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.273591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.273723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.273756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.273951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.273984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.274226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.274260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.274464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.274499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.274691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.274724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.274973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.275005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.275196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.275228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 
00:26:45.855 [2024-11-07 10:55:13.275366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.275398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.275600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.275634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.275845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.275877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.276001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.276034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.276229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.276262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.276373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.276405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.276691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.276725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.276921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.276960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.277086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.277119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.277385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.277418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 
00:26:45.855 [2024-11-07 10:55:13.277680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.277714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.277903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.277935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.278116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.278148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.278346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.278379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.278634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.278668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.278857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.278890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.279013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.279046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.279241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.279273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.855 [2024-11-07 10:55:13.279573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.855 [2024-11-07 10:55:13.279607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.855 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.279873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.279907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 
00:26:45.856 [2024-11-07 10:55:13.280096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.280127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.280326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.280359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.280582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.280617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.280799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.280832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.280959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.280992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.281169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.281201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.281344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.281376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.281630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.281646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.281800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.281833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.282078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.282111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 
00:26:45.856 [2024-11-07 10:55:13.282284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.282317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.282499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.282532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.282800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.282834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.282973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.283006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.283192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.283224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.283406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.283450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.283561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.283594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.283792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.283824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.283933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.283966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.284231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.284265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 
00:26:45.856 [2024-11-07 10:55:13.284460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.284477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.284558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.284573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.284786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.284802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.284905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.284920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.285002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.285016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.285171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.285187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.285353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.285385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.285508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.285548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.285674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.285707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.285906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.285939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 
00:26:45.856 [2024-11-07 10:55:13.286149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.286181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.286385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.286418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.856 [2024-11-07 10:55:13.286608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.856 [2024-11-07 10:55:13.286624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.856 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.286775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.286808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.287055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.287088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.287273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.287305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.287480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.287514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.287641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.287674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.287866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.287898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.288165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.288197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 
00:26:45.857 [2024-11-07 10:55:13.288374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.288407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.288551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.288584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.288835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.288869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.289112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.289145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.289341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.289373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.289567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.289602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.289846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.289879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.290102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.290117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.290206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.290221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.290310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.290325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 
00:26:45.857 [2024-11-07 10:55:13.290475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.290492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.290580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.290621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.290748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.290781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.291002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.291035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.291284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.291317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.291458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.291493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.291637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.291653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.291874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.291907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.292035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.292067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.292270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.292302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 
00:26:45.857 [2024-11-07 10:55:13.292485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.292501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.292655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.292688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.292865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.292897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.293079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.293110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.293384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.293418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.293561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.293595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.293785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.857 [2024-11-07 10:55:13.293818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.857 qpair failed and we were unable to recover it. 00:26:45.857 [2024-11-07 10:55:13.293930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.293969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.294235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.294266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.294456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.294500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 
00:26:45.858 [2024-11-07 10:55:13.294658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.294674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.294832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.294864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.294992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.295024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.295202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.295234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.295446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.295463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.295616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.295648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.295828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.295861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.295980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.296013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.296217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.296249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.296447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.296481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 
00:26:45.858 [2024-11-07 10:55:13.296665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.296698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.296815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.296848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.297030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.297064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.297255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.297287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.297482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.297516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.297631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.297665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.297870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.297902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.298026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.298059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.298242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.298275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.298469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.298485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 
00:26:45.858 [2024-11-07 10:55:13.298718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.298751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.298878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.298911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.299151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.299184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.299321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.299353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.299606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.299641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.299910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.299943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.300141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.858 [2024-11-07 10:55:13.300173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.858 qpair failed and we were unable to recover it. 00:26:45.858 [2024-11-07 10:55:13.300365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.300397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.300630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.300663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.300937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.300970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 
00:26:45.859 [2024-11-07 10:55:13.301097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.301113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.301253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.301270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.301364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.301378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.301464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.301479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.301568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.301582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.301654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.301669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.301875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.301890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.302075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.302094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.302247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.302262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.302363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.302395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 
00:26:45.859 [2024-11-07 10:55:13.302603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.302636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.302832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.302865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.302977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.303010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.303226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.303242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.303328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.303342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.303510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.303526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.303696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.303729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.303995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.304028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.304165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.304198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.304306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.304321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 
00:26:45.859 [2024-11-07 10:55:13.304408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.304422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.304596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.304613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.304714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.304744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.304871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.304904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.305036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.305068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.305312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.305345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.305588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.305632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.305771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.305787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.305958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.305990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.306114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.306147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 
00:26:45.859 [2024-11-07 10:55:13.306358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.859 [2024-11-07 10:55:13.306373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.859 qpair failed and we were unable to recover it. 00:26:45.859 [2024-11-07 10:55:13.306517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.306534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.306747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.306780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.306922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.306953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.307162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.307197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.307301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.307316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.307475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.307509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.307708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.307742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.307876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.307908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.308125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.308158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 
00:26:45.860 [2024-11-07 10:55:13.308291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.308324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.308478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.308512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.308770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.308803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.309011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.309043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.309279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.309295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.309466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.309500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.309762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.309796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.309991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.310038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.310254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.310287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.310478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.310495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 
00:26:45.860 [2024-11-07 10:55:13.310655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.310689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.310932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.310965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.311204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.311236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.311494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.311510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.311662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.311678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.311822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.311854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.311988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.312020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.312289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.312334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.312497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.312513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.312600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.312633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 
00:26:45.860 [2024-11-07 10:55:13.312880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.312912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.313108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.313141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.313252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.313266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.313356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.313370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.860 qpair failed and we were unable to recover it. 00:26:45.860 [2024-11-07 10:55:13.313467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.860 [2024-11-07 10:55:13.313482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.313635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.313651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.313814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.313847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.314122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.314156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.314332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.314347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.314578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.314612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 
00:26:45.861 [2024-11-07 10:55:13.314881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.314915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.315179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.315211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.315330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.315362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.315551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.315586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.315785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.315801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.316014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.316047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.316225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.316257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.316490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.316506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.316661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.316677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.316858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.316891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 
00:26:45.861 [2024-11-07 10:55:13.317069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.317102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.317280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.317313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.317573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.317590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.317742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.317757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.317925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.317958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.318149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.318182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.318327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.318360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.318534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.318574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.318779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.318812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.319005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.319038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 
00:26:45.861 [2024-11-07 10:55:13.319243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.319275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.319466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.319482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.319652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.319685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.319888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.319920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.320135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.320168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.320326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.320341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.320508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.320542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.320687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.861 [2024-11-07 10:55:13.320720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.861 qpair failed and we were unable to recover it. 00:26:45.861 [2024-11-07 10:55:13.320938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.320970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.321186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.321219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 
00:26:45.862 [2024-11-07 10:55:13.321412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.321454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.321647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.321684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.321839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.321855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.322019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.322052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.322318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.322350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.322483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.322517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.322738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.322755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.322968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.322984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.323243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.323282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.323408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.323452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 
00:26:45.862 [2024-11-07 10:55:13.323564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.323602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.323703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.323718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.323857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.323896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.324151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.324184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.324453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.324488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.324671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.324687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.324830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.324846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.325010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.325025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.325285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.325301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.325455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.325471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 
00:26:45.862 [2024-11-07 10:55:13.325633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.325665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.325860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.325893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.326008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.326040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.326299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.326316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.326388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.326402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.326665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.326699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.326879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.326912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.327191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.327230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.327415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.327457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.327679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.327712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 
00:26:45.862 [2024-11-07 10:55:13.327931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.327964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.328160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.862 [2024-11-07 10:55:13.328192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.862 qpair failed and we were unable to recover it. 00:26:45.862 [2024-11-07 10:55:13.328381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.328398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.328493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.328508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.328581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.328595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.328860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.328893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.329023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.329056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.329184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.329216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.329391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.329407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.329630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.329666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 
00:26:45.863 [2024-11-07 10:55:13.329840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.329872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.330052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.330086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.330354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.330388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.330594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.330628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.330868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.330883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.330967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.331012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.331189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.331222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.331412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.331427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.331529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.331544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.331615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.331629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 
00:26:45.863 [2024-11-07 10:55:13.331773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.331788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.331952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.331967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.332059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.332074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.332176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.332192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.332383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.332473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.332718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.332756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.332947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.332991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.333201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.333217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.333382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.333398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.333505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.333522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 
00:26:45.863 [2024-11-07 10:55:13.333752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.333768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.333874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.863 [2024-11-07 10:55:13.333907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.863 qpair failed and we were unable to recover it. 00:26:45.863 [2024-11-07 10:55:13.334032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.864 [2024-11-07 10:55:13.334064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.864 qpair failed and we were unable to recover it. 00:26:45.864 [2024-11-07 10:55:13.334242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.864 [2024-11-07 10:55:13.334275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.864 qpair failed and we were unable to recover it. 00:26:45.864 [2024-11-07 10:55:13.334460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.864 [2024-11-07 10:55:13.334477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.864 qpair failed and we were unable to recover it. 00:26:45.864 [2024-11-07 10:55:13.334551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.864 [2024-11-07 10:55:13.334565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.864 qpair failed and we were unable to recover it. 00:26:45.864 [2024-11-07 10:55:13.334649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.864 [2024-11-07 10:55:13.334664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.864 qpair failed and we were unable to recover it. 00:26:45.864 [2024-11-07 10:55:13.334758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.864 [2024-11-07 10:55:13.334773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.864 qpair failed and we were unable to recover it. 00:26:45.864 [2024-11-07 10:55:13.334867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.864 [2024-11-07 10:55:13.334882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.864 qpair failed and we were unable to recover it. 00:26:45.864 [2024-11-07 10:55:13.335050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.864 [2024-11-07 10:55:13.335083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.864 qpair failed and we were unable to recover it. 
00:26:45.864 [2024-11-07 10:55:13.335331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.864 [2024-11-07 10:55:13.335365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420
00:26:45.864 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 10:55:13.335 through 10:55:13.376, always against addr=10.0.0.2, port=4420, alternating between tqpair=0x1f1dbe0, tqpair=0x7f8978000b90, and tqpair=0x7f8970000b90 ...]
00:26:45.870 [2024-11-07 10:55:13.376610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.870 [2024-11-07 10:55:13.376625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420
00:26:45.870 qpair failed and we were unable to recover it.
00:26:45.870 [2024-11-07 10:55:13.376716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.376749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.376876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.376909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.377102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.377146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.377276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.377310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.377554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.377589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.377728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.377762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.377975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.377991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.378084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.378117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.378297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.378330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.378579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.378613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 
00:26:45.870 [2024-11-07 10:55:13.378827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.378843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.378985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.379018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.379195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.379229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.379352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.379386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.379658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.379674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.379853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.379886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.380110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.870 [2024-11-07 10:55:13.380143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.870 qpair failed and we were unable to recover it. 00:26:45.870 [2024-11-07 10:55:13.380271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.380304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.380447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.380482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.380625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.380642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 
00:26:45.871 [2024-11-07 10:55:13.380818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.380834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.380970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.380987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.381168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.381184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.381352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.381385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.381543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.381560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.381790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.381824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.382007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.382039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.382244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.382277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.382398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.382444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.382719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.382757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 
00:26:45.871 [2024-11-07 10:55:13.383029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.383041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.383182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.383193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.383259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.383269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.383409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.383421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.383563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.383574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.383713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.383746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.384016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.384049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.384243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.384275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.384489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.384522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.384650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.384661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 
00:26:45.871 [2024-11-07 10:55:13.384859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.384891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.385169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.385202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.385343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.385380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.385580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.385613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.385802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.385814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.386019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.386053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.386322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.386362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.386527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.386538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.386692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.386724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.386846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.386877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 
00:26:45.871 [2024-11-07 10:55:13.387067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.871 [2024-11-07 10:55:13.387097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.871 qpair failed and we were unable to recover it. 00:26:45.871 [2024-11-07 10:55:13.387368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.387401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.387514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.387537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.387765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.387797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.387928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.387959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.388160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.388192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.388386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.388418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.388573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.388604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.388854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.388888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.389019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.389050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 
00:26:45.872 [2024-11-07 10:55:13.389243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.389275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.389393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.389424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.389642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.389675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.389919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.389952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.390149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.390181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.390320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.390353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.390532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.390566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.390821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.390854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.390987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.391018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.391217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.391248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 
00:26:45.872 [2024-11-07 10:55:13.391538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.391571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.391771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.391783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.391930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.391941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.392092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.392104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.392196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.392206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.392355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.392367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.392515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.392527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.392609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.392620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.392694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.392718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.392902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.392935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 
00:26:45.872 [2024-11-07 10:55:13.393068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.393101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.393229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.393260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.393459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.393493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.393674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.393706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.872 [2024-11-07 10:55:13.393813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.872 [2024-11-07 10:55:13.393824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.872 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.393893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.393914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.393993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.394004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.394221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.394232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.394314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.394324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.394393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.394403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 
00:26:45.873 [2024-11-07 10:55:13.394479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.394491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.394695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.394707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.394881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.394892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.394971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.394981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.395109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.395121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.395200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.395211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.395417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.395430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.395594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.395625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.395802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.395836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.395954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.395986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 
00:26:45.873 [2024-11-07 10:55:13.396106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.396145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.396286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.396297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.396387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.396398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.396548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.396560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.396626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.396638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.396807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.396819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.396954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.396986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.397182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.397216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.397394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.397426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.397688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.397702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 
00:26:45.873 [2024-11-07 10:55:13.397862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.397893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.398087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.398119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.398256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.398290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.398432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.398476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.398721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.873 [2024-11-07 10:55:13.398754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.873 qpair failed and we were unable to recover it. 00:26:45.873 [2024-11-07 10:55:13.398871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.398903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.399100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.399133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.399327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.399359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.399540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.399573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.399708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.399747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 
00:26:45.874 [2024-11-07 10:55:13.399877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.399889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.400021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.400032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.400194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.400205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.400282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.400293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.400372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.400383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.400544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.400557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.400635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.400645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.400713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.400723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.400802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.400813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.400884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.400895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 
00:26:45.874 [2024-11-07 10:55:13.401071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.401082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.401220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.401251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.401375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.401408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.401641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.401674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.401803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.401814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.401994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.402005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.402169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.402181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.402348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.402361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.402531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.402543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.402751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.402763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 
00:26:45.874 [2024-11-07 10:55:13.402909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.402921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.403078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.403089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.403166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.403177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.403324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.403354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.403622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.403633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.403784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.403795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.404009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.404041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.404301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.404333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.404568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.874 [2024-11-07 10:55:13.404580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.874 qpair failed and we were unable to recover it. 00:26:45.874 [2024-11-07 10:55:13.404726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.404764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 
00:26:45.875 [2024-11-07 10:55:13.404883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.404914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.405093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.405124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.405263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.405296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.405514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.405525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.405655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.405667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.405815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.405827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.405904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.405915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.406109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.406140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.406398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.406430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.406560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.406592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 
00:26:45.875 [2024-11-07 10:55:13.406729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.406772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.406950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.406961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.407186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.407198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.407406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.407417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.407638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.407650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.407822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.407833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.407929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.407941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.408031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.408041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.408258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.408289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.408487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.408519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 
00:26:45.875 [2024-11-07 10:55:13.408642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.408673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.408862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.408894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.409006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.409038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.409259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.409290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.409483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.409516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.409801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.409835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.410122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.410134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.410296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.410329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.410521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.410555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.410749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.410788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 
00:26:45.875 [2024-11-07 10:55:13.410916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.410927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.411006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.411016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.875 qpair failed and we were unable to recover it. 00:26:45.875 [2024-11-07 10:55:13.411108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.875 [2024-11-07 10:55:13.411119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.411337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.411368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.411623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.411656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.411778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.411811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.411989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.412000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.412195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.412206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.412356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.412367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.412506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.412519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 
00:26:45.876 [2024-11-07 10:55:13.412607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.412618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.412749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.412761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.412903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.412915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.413060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.413091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.413222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.413254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.413391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.413422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.413582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.413616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.413688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.413698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.413917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.413950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.414127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.414160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 
00:26:45.876 [2024-11-07 10:55:13.414416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.414460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.414730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.414762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.414959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.414971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.415126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.415138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.415333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.415345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.415493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.415527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.415652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.415680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.415878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.415889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.416047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.416058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.416221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.416232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 
00:26:45.876 [2024-11-07 10:55:13.416312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.416322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.416406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.416416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.416521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.416531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.416624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.416634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.416763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.416775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.416919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.416931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.876 qpair failed and we were unable to recover it. 00:26:45.876 [2024-11-07 10:55:13.417008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.876 [2024-11-07 10:55:13.417018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.417116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.417126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.417204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.417216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.417287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.417299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 
00:26:45.877 [2024-11-07 10:55:13.417384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.417394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.417487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.417499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.417644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.417676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.417808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.417841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.418023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.418056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.418233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.418245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.418378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.418390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.418464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.418475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.418538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.418549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.418705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.418718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 
00:26:45.877 [2024-11-07 10:55:13.418804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.418815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.418891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.418901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.419032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.419044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.419266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.419297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.419491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.419525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.419653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.419694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.419837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.419848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.420051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.420084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.420212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.420244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.420353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.420385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 
00:26:45.877 [2024-11-07 10:55:13.420570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.420603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.420777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.420809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.420917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.420949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.421136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.421169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.421364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.421396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.421547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.421580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.421777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.421809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.422001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.422034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.422293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.422325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.422518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.422552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 
00:26:45.877 [2024-11-07 10:55:13.422677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.422708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.422954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.422985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.423178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.423210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.423411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.877 [2024-11-07 10:55:13.423454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.877 qpair failed and we were unable to recover it. 00:26:45.877 [2024-11-07 10:55:13.423644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.423655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.423899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.423932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.424128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.424160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.424356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.424388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.424518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.424551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.424685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.424717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 
00:26:45.878 [2024-11-07 10:55:13.424902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.424913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.425056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.425088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.425354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.425387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.425601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.425633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.425791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.425803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.425919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.425952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.426074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.426105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.426370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.426402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.426597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2bb20 is same with the state(6) to be set 00:26:45.878 [2024-11-07 10:55:13.426869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.426906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 
00:26:45.878 [2024-11-07 10:55:13.427027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.427063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.427331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.427402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.427571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.427607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.427788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.427819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.427965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.427997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.428126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.428158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.428344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.428376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.428559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.428595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.428785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.428796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.428860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.428870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 
00:26:45.878 [2024-11-07 10:55:13.428950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.428961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.429041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.429052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.429210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.429242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.429380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.429422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.429689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.429723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.429924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.429957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.430201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.430233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.430430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.430473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.878 [2024-11-07 10:55:13.430715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.878 [2024-11-07 10:55:13.430749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.878 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.430992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.431025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 
00:26:45.879 [2024-11-07 10:55:13.431146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.431179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.431315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.431347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.431471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.431508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.431646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.431661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.431871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.431902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.432093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.432124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.432317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.432366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.432558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.432592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.432726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.432757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.432973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.433005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 
00:26:45.879 [2024-11-07 10:55:13.433139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.433170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.433296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.433328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.433510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.433544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.433804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.433836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.433965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.433997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.434128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.434161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.434339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.434370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.434489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.434522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.434708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.434740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.434916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.434959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 
00:26:45.879 [2024-11-07 10:55:13.435144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.435157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.435359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.435391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.435650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.435684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.435945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.435957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.436159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.436190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.436460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.436493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.436608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.436620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.436808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.436840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.436952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.436983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.437221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.437252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 
00:26:45.879 [2024-11-07 10:55:13.437497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.437530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.437644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.437655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.437863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.437895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.879 [2024-11-07 10:55:13.438034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.879 [2024-11-07 10:55:13.438071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.879 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.438183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.438215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.438467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.438500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.438686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.438718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.438970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.439002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.439188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.439221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.439412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.439464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 
00:26:45.880 [2024-11-07 10:55:13.439642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.439673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.439864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.439895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.440126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.440138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.440281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.440292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.440529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.440541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.440730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.440764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.440885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.440917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.441096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.441128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.441239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.441271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.441532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.441566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 
00:26:45.880 [2024-11-07 10:55:13.441685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.441717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.441922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.441953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.442129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.442161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.442381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.442413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.442562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.442594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.442779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.442812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.443026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.443057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.443232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.443265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.443452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.443486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.443615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.443648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 
00:26:45.880 [2024-11-07 10:55:13.443917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.443948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.444082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.444094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.444248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.444260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.444338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.444349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.444428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.444443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.444544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.444575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.444765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.444798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.445069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.880 [2024-11-07 10:55:13.445102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.880 qpair failed and we were unable to recover it. 00:26:45.880 [2024-11-07 10:55:13.445276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.445308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.445441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.445475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 
00:26:45.881 [2024-11-07 10:55:13.445694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.445725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.446001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.446033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.446169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.446201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.446377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.446414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.446561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.446594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.446781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.446814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.446932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.446965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.447241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.447273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.447410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.447451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.447632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.447664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 
00:26:45.881 [2024-11-07 10:55:13.447852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.447884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.448077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.448108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.448314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.448347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.448466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.448499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.448703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.448735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.448914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.448947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.449126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.449158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.449299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.449331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.449470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.449504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.449687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.449699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 
00:26:45.881 [2024-11-07 10:55:13.449875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.449906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.450084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.450115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.450305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.450336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.450526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.450559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.450756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.450788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.450919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.450951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.451133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.451164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.451352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.451384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.451578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.451611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.451797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.451829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 
00:26:45.881 [2024-11-07 10:55:13.451947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.451984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.452064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.881 [2024-11-07 10:55:13.452074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.881 qpair failed and we were unable to recover it. 00:26:45.881 [2024-11-07 10:55:13.452157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.452168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.452264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.452294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.452538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.452571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.452748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.452760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.452908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.452939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.453061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.453093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.453279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.453311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.453513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.453545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 
00:26:45.882 [2024-11-07 10:55:13.453668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.453701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.453818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.453850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.454049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.454081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.454323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.454361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.454557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.454590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.454775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.454808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.454986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.454997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.455128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.455139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.455316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.455349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.455529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.455564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 
00:26:45.882 [2024-11-07 10:55:13.455713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.455745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.455865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.455898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.456027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.456059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.456254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.456286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.456469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.456504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.456622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.456656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.456875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.456907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.457159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.457192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.457315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.457348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.457574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.457607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 
00:26:45.882 [2024-11-07 10:55:13.457789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.457821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.458038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.458071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.458250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.458283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.458414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.458456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.458638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.458670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.458880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.882 [2024-11-07 10:55:13.458892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.882 qpair failed and we were unable to recover it. 00:26:45.882 [2024-11-07 10:55:13.459682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.459730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.459837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.459848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.460002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.460013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.460145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.460156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 
00:26:45.883 [2024-11-07 10:55:13.460363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.460398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.460601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.460634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.460766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.460800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.461007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.461019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.461107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.461118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.461205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.461229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.461301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.461312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.461487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.461521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.461626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.461659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.461847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.461879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 
00:26:45.883 [2024-11-07 10:55:13.462071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.462083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.462176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.462188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.462406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.462449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.462595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.462634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.462883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.462916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.463098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.463110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.463248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.463260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.463387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.463420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.463637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.463670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.463862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.463894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 
00:26:45.883 [2024-11-07 10:55:13.464144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.464176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.464299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.464330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.464453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.464485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.464613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.464646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.464760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.464771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.464932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.883 [2024-11-07 10:55:13.464944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.883 qpair failed and we were unable to recover it. 00:26:45.883 [2024-11-07 10:55:13.465048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.465060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 00:26:45.884 [2024-11-07 10:55:13.465270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.465303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 00:26:45.884 [2024-11-07 10:55:13.465418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.465461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 00:26:45.884 [2024-11-07 10:55:13.465642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.465675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 
00:26:45.884 [2024-11-07 10:55:13.465817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.465848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 00:26:45.884 [2024-11-07 10:55:13.465993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.466025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 00:26:45.884 [2024-11-07 10:55:13.466272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.466305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 00:26:45.884 [2024-11-07 10:55:13.466564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.466598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 00:26:45.884 [2024-11-07 10:55:13.466773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.466785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 00:26:45.884 [2024-11-07 10:55:13.466963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.466996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 00:26:45.884 [2024-11-07 10:55:13.467135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.467167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 00:26:45.884 [2024-11-07 10:55:13.467369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.467402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 00:26:45.884 [2024-11-07 10:55:13.467588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.467621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 00:26:45.884 [2024-11-07 10:55:13.467748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.884 [2024-11-07 10:55:13.467775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:45.884 qpair failed and we were unable to recover it. 
00:26:45.884 [2024-11-07 10:55:13.467850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.884 [2024-11-07 10:55:13.467861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:45.884 qpair failed and we were unable to recover it.
00:26:46.174 [2024-11-07 10:55:13.492177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.174 [2024-11-07 10:55:13.492227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:46.174 qpair failed and we were unable to recover it.
00:26:46.174 [2024-11-07 10:55:13.492486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.174 [2024-11-07 10:55:13.492521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420
00:26:46.174 qpair failed and we were unable to recover it.
00:26:46.176 [2024-11-07 10:55:13.500853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.176 [2024-11-07 10:55:13.500865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:46.176 qpair failed and we were unable to recover it.
00:26:46.176 [2024-11-07 10:55:13.500942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.500953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.501131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.501163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.501376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.501409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.501542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.501581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.501703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.501736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.501844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.501878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.502077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.502110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.502230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.502261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.502550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.502584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.502697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.502730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 
00:26:46.176 [2024-11-07 10:55:13.503016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.503050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.503160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.503192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.503369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.503402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.503534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.503567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.503745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.503778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.503899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.503931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.504125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.504138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.504310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.504322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.504404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.504414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.504617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.504629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 
00:26:46.176 [2024-11-07 10:55:13.504708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.504719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.504814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.504824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.504957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.504985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.505128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.505139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.505286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.505299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.505444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.505491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.505736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.505768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.505905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.505938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.176 qpair failed and we were unable to recover it. 00:26:46.176 [2024-11-07 10:55:13.506067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.176 [2024-11-07 10:55:13.506107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.506241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.506253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 
00:26:46.177 [2024-11-07 10:55:13.506444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.506458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.506531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.506542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.506610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.506621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.506771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.506782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.506925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.506937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.507151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.507183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.507387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.507419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.507556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.507589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.507702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.507744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.507946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.507958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 
00:26:46.177 [2024-11-07 10:55:13.508022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.508042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.508144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.508157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.508242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.508253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.508329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.508342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.508439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.508452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.508541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.508572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.508693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.508724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.508836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.508848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.508990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.509015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.509206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.509235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 
00:26:46.177 [2024-11-07 10:55:13.509417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.509455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.509664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.509694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.509892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.509922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.510142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.510172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.510308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.510319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.510403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.510412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.510570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.510580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.510663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.510674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.510804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.510813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 00:26:46.177 [2024-11-07 10:55:13.510901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.177 [2024-11-07 10:55:13.510912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.177 qpair failed and we were unable to recover it. 
00:26:46.177 [2024-11-07 10:55:13.511055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.511084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.511206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.511236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.511381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.511413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.511676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.511706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.511933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.511965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.512185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.512215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.512393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.512423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.512554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.512586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.512725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.512756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.512934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.512964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 
00:26:46.178 [2024-11-07 10:55:13.513287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.513357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.513542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.513560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.513719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.513733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.513908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.513923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.514159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.514173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.514271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.514286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.514383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.514395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.514479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.514491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.514651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.514663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.514738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.514750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 
00:26:46.178 [2024-11-07 10:55:13.514887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.514898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.515050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.515060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.515192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.515203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.515263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.515276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.515405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.515417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.515494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.515505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.515607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.515619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.515686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.515696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.515754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.515764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 00:26:46.178 [2024-11-07 10:55:13.515855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.515866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.178 qpair failed and we were unable to recover it. 
00:26:46.178 [2024-11-07 10:55:13.516039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.178 [2024-11-07 10:55:13.516072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.516267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.516298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.516419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.516464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.516604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.516638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.516759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.516793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.516991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.517025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.517204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.517215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.517288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.517301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.517371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.517381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.517544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.517556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 
00:26:46.179 [2024-11-07 10:55:13.517624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.517644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.517733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.517744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.517816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.517827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.517907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.517918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.518050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.518061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.518149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.518161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.518296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.518335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.518481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.518517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.518630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.518663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.518837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.518871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 
00:26:46.179 [2024-11-07 10:55:13.519088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.519104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.519257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.519274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.519423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.519450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.519689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.519722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.519925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.519958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.520151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.520184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.520314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.520346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.520535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.520569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.520744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.520776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 00:26:46.179 [2024-11-07 10:55:13.521030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.179 [2024-11-07 10:55:13.521062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.179 qpair failed and we were unable to recover it. 
00:26:46.180 [2024-11-07 10:55:13.521184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.521216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.521473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.521507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.521708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.521740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.521934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.521973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.522269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.522302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.522496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.522530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.522670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.522703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.522844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.522860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.523088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.523120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.523346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.523380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 
00:26:46.180 [2024-11-07 10:55:13.523652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.523685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.523944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.523977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.524093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.524108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.524354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.524386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.524590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.524624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.524832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.524864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.525001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.525034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.525184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.525219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.525321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.525337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.525573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.525606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 
00:26:46.180 [2024-11-07 10:55:13.525854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.525887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.526066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.526099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.526283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.526316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.526491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.526526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.526716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.526749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.526995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.527028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.527217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.527252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.527380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.527413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.527607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.527642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 00:26:46.180 [2024-11-07 10:55:13.527766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.527800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.180 qpair failed and we were unable to recover it. 
00:26:46.180 [2024-11-07 10:55:13.528026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.180 [2024-11-07 10:55:13.528065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.528173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.528190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.528336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.528379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.528518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.528554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.528825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.528858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.529034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.529050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.529202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.529237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.529486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.529520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.529733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.529767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.529876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.529909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 
00:26:46.181 [2024-11-07 10:55:13.530108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.530141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.530326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.530360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.530497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.530532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.530646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.530679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.530892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.530925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.531048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.531082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.531309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.531342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.531590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.531624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.531774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.531807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.531998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.532029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 
00:26:46.181 [2024-11-07 10:55:13.532165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.532198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.532384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.532419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.532616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.532649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.532844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.532877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.533058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.533092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.533330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.533346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.533565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.533582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.533748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.533767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.533930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.533963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.181 [2024-11-07 10:55:13.534094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.534128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 
00:26:46.181 [2024-11-07 10:55:13.534270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.181 [2024-11-07 10:55:13.534303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.181 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.534426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.534469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.534695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.534728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.534949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.534982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.535227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.535244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.535353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.535369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.535537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.535553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.535655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.535688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.535801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.535833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.536027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.536060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 
00:26:46.182 [2024-11-07 10:55:13.536193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.536224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.536444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.536463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.536560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.536575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.536668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.536684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.536786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.536801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.536874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.536889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.536965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.536979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.537131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.537164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.537351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.537384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.537516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.537549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 
00:26:46.182 [2024-11-07 10:55:13.537672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.537705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.537888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.537920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.538110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.538143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.538263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.538278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.538421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.538444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.538523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.538538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.538761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.538777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.538961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.538978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.539139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.539154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.182 [2024-11-07 10:55:13.539233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.539247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 
00:26:46.182 [2024-11-07 10:55:13.539380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.182 [2024-11-07 10:55:13.539395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.182 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.539538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.539555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.539692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.539709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.539788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.539802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.539893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.539909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.540091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.540124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.540267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.540299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.540519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.540553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.540759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.540793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.540982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.541015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 
00:26:46.183 [2024-11-07 10:55:13.541130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.541163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.541365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.541381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.541459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.541474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.541574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.541589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.541727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.541743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.541827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.541842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.541993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.542008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.542093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.542108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.542262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.542279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.542423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.542442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 
00:26:46.183 [2024-11-07 10:55:13.542597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.542613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.542721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.542737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.542832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.542848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.542935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.542950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.543044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.543059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.543144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.543158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.543262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.543277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.543361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.543377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.543466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.543481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.543572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.543587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 
00:26:46.183 [2024-11-07 10:55:13.543764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.543780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.544003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.544036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.544232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.544264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.544385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.183 [2024-11-07 10:55:13.544420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.183 qpair failed and we were unable to recover it. 00:26:46.183 [2024-11-07 10:55:13.544625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.544658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.544849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.544921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.545070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.545108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.545291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.545307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.545402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.545417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.545504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.545519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 
00:26:46.184 [2024-11-07 10:55:13.545676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.545693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.545836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.545869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.546070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.546102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.546247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.546277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.546546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.546561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.546707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.546722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.546888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.546921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.547118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.547150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.547373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.547416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.547539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.547572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 
00:26:46.184 [2024-11-07 10:55:13.547704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.547736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.548030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.548063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.548241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.548272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.548469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.548503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.548630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.548661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.548763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.548778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.548851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.548866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.549010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.549041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.549163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.549195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.549307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.549339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 
00:26:46.184 [2024-11-07 10:55:13.549513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.549546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.549662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.549692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.549844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.549877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.184 [2024-11-07 10:55:13.550060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.184 [2024-11-07 10:55:13.550092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.184 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.550216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.550246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.550379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.550411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.550550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.550583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.550776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.550809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.551051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.551084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.551330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.551363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 
00:26:46.185 [2024-11-07 10:55:13.551477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.551510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.551689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.551721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.551915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.551946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.552128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.552144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.552380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.552413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.552614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.552647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.552766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.552797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.552943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.552975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.553163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.553197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.553343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.553358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 
00:26:46.185 [2024-11-07 10:55:13.553447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.553462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.553549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.553563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.553656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.553672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.553770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.553785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.553865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.553879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.553956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.553970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.554145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.554177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.554294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.554324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.554468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.554509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.554699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.554731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 
00:26:46.185 [2024-11-07 10:55:13.554863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.554878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.555039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.555054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.185 qpair failed and we were unable to recover it. 00:26:46.185 [2024-11-07 10:55:13.555133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.185 [2024-11-07 10:55:13.555146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.555242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.555258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.555336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.555350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.555441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.555456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.555605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.555645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.555773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.555804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.556050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.556082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.556202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.556233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 
00:26:46.186 [2024-11-07 10:55:13.556318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.556331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.556522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.556556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.556779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.556811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.557016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.557049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.557159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.557174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.557279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.557294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.557373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.557386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.557486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.557501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.557652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.557667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.557762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.557778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 
00:26:46.186 [2024-11-07 10:55:13.557926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.557957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.558079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.558111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.558233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.558265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.558408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.558447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.558659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.558691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.558886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.558917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.559091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.559122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.559312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.559345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.559535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.559568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.559748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.559782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 
00:26:46.186 [2024-11-07 10:55:13.559962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.559993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.560179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.560210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.560402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.560417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.560527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.560544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.186 qpair failed and we were unable to recover it. 00:26:46.186 [2024-11-07 10:55:13.560710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.186 [2024-11-07 10:55:13.560750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.560934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.560966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.561171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.561204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.561324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.561340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.561497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.561513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.561752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.561768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 
00:26:46.187 [2024-11-07 10:55:13.561925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.561956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.562097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.562129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.562333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.562365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.562556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.562591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.562709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.562742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.562930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.562963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.563190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.563222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.563486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.563502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.563586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.563600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.563759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.563774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 
00:26:46.187 [2024-11-07 10:55:13.563862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.563875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.564027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.564043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.564217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.564233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.564394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.564409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.564623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.564654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.564781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.564812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.564946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.564978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.565153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.565186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.565431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.565478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.565733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.565765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 
00:26:46.187 [2024-11-07 10:55:13.565879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.565911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.566173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.566205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.566419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.566459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.566604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.566636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.566821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.566852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.567044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.567081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.187 [2024-11-07 10:55:13.567203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.187 [2024-11-07 10:55:13.567235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.187 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.567518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.567553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.567746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.567778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.568046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.568078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 
00:26:46.188 [2024-11-07 10:55:13.568282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.568298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.568396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.568427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.568634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.568666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.568786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.568818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.569091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.569122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.569233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.569266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.569451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.569483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.569672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.569704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.569828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.569860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.569984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.570016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 
00:26:46.188 [2024-11-07 10:55:13.570137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.570169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.570279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.570310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.570519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.570559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.570789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.570804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.570950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.570966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.571088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.571121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.571300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.571333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.571466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.571499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.571771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.571803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.571933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.571965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 
00:26:46.188 [2024-11-07 10:55:13.572149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.572180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.572330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.572344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.572515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.572549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.572841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.572874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.573122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.573155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.573375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.573409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.573547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.188 [2024-11-07 10:55:13.573580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.188 qpair failed and we were unable to recover it. 00:26:46.188 [2024-11-07 10:55:13.573757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.573790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.573925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.573956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.574148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.574178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 
00:26:46.189 [2024-11-07 10:55:13.574419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.574463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.574585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.574617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.574799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.574830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.574949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.574986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.575194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.575210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.575376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.575394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.575484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.575500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.575669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.575712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.575908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.575940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.576069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.576101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 
00:26:46.189 [2024-11-07 10:55:13.576359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.576391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.576606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.576639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.576752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.576783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.576967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.577010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.577166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.577182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.577300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.577372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.577563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.577636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.577878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.577915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.578106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.578139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.578364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.578399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 
00:26:46.189 [2024-11-07 10:55:13.578552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.578588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.578730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.578765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.578902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.578935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.579114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.579149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.579283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.579317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.579613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.579647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.579857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.579890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.580071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.580088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.580270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.580304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.580444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.580479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 
00:26:46.189 [2024-11-07 10:55:13.580656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.189 [2024-11-07 10:55:13.580688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.189 qpair failed and we were unable to recover it. 00:26:46.189 [2024-11-07 10:55:13.580941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.580974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.581234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.581305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.581516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.581552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.581688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.581720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.581860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.581892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.582167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.582183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.582341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.582356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.582456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.582471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.582617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.582633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 
00:26:46.190 [2024-11-07 10:55:13.582752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.582785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.582964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.582998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.583187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.583218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.583349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.583382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.583517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.583550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.583684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.583717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.584005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.584038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.584216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.584248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.584469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.584509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.584644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.584677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 
00:26:46.190 [2024-11-07 10:55:13.584795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.584827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.584990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.585005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.585192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.585225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.585372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.585404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.585573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.585647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.585882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.585920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.190 [2024-11-07 10:55:13.586042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.190 [2024-11-07 10:55:13.586075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.190 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.586251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.586285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.586460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.586477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.586649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.586683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 
00:26:46.191 [2024-11-07 10:55:13.586887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.586920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.587053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.587087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.587235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.587267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.587466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.587499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.587636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.587669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.587871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.587904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.588185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.588202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.588343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.588359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.588539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.588556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.588653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.588683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 
00:26:46.191 [2024-11-07 10:55:13.588869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.588901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.589089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.589122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.589309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.589328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.589487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.589520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.589653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.589687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.589916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.589950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.590131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.590165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.590375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.590408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.590561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.590594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.590778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.590810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 
00:26:46.191 [2024-11-07 10:55:13.590931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.590948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.591093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.591109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.591364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.591404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.591538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.591573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.591698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.591731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.191 [2024-11-07 10:55:13.591975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.191 [2024-11-07 10:55:13.592008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.191 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.592205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.592240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.592374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.592407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.592610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.592643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.592833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.592867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 
00:26:46.192 [2024-11-07 10:55:13.593133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.593166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.593442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.593477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.593614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.593647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.593787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.593821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.594002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.594036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.594281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.594314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.594506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.594541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.594748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.594781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.595024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.595057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.595327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.595361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 
00:26:46.192 [2024-11-07 10:55:13.595488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.595505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.595647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.595662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.595811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.595827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.595911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.595926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.596080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.596096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.596195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.596209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.596356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.596372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.596451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.596467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.596549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.596564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.596703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.596719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 
00:26:46.192 [2024-11-07 10:55:13.596874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.596891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.597054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.597089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.597198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.597237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.597375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.597408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.597552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.597586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.597704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.597736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.597848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.597880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.598062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.598095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.192 [2024-11-07 10:55:13.598219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.192 [2024-11-07 10:55:13.598253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.192 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.598445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.598462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 
00:26:46.193 [2024-11-07 10:55:13.598645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.598662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.598805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.598821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.598983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.599016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.599143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.599175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.599284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.599317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.599498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.599532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.599734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.599767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.599907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.599940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.600189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.600221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.600442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.600476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 
00:26:46.193 [2024-11-07 10:55:13.600590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.600622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.600739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.600772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.600975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.601009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.601141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.601172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.601355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.601371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.601555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.601590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.601726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.601759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.601981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.602013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.602162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.602178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.602285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.602302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 
00:26:46.193 [2024-11-07 10:55:13.602470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.602485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.602652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.602684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.602808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.602841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.603038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.603072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.603248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.603264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.603360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.603375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.603604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.603621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.193 [2024-11-07 10:55:13.603878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.193 [2024-11-07 10:55:13.603894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.193 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.604052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.604068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.604229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.604246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 
00:26:46.194 [2024-11-07 10:55:13.604407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.604423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.604696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.604729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.604913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.604951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.605150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.605182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.605311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.605327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.605487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.605504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.605602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.605618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.605760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.605799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.605903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.605937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.606128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.606161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 
00:26:46.194 [2024-11-07 10:55:13.606361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.606394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.606659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.606693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.607007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.607040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.607218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.607250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.607373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.607389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.607514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.607530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.607619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.607634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.607809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.607825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.607903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.607917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.608011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.608027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 
00:26:46.194 [2024-11-07 10:55:13.608251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.608267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.608427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.608448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.608605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.608620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.608832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.608848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.609054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.609070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.609228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.609243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.609410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.194 [2024-11-07 10:55:13.609427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.194 qpair failed and we were unable to recover it. 00:26:46.194 [2024-11-07 10:55:13.609537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.609570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.609689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.609721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.609844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.609876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 
00:26:46.195 [2024-11-07 10:55:13.610007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.610040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.610281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.610298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.610440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.610463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.610642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.610685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.610843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.610877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.611000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.611033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.611168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.611201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.611391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.611424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.611621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.611656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.611770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.611803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 
00:26:46.195 [2024-11-07 10:55:13.611923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.611955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.612070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.612102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.612212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.612251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.612493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.612510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.612757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.612790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.613046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.613079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.613296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.613330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.613577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.613611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.613735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.613768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.613886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.613919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 
00:26:46.195 [2024-11-07 10:55:13.614112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.614145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.614394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.614410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.614622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.614638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.614868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.614883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.614975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.614990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.615260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.615293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.615443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.615480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.615624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.615656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.615833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.615866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.195 qpair failed and we were unable to recover it. 00:26:46.195 [2024-11-07 10:55:13.616058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.195 [2024-11-07 10:55:13.616093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 
00:26:46.196 [2024-11-07 10:55:13.616283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.616315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.616580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.616598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.616748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.616764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.616931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.616964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.617166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.617198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.617327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.617360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.617494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.617522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.617688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.617718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.617898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.617912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.618082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.618095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 
00:26:46.196 [2024-11-07 10:55:13.618265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.618277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.618459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.618492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.618671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.618703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.618826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.618857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.619106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.619140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.619268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.619301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.619507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.619519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.619707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.619740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.619920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.619952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.620138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.620172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 
00:26:46.196 [2024-11-07 10:55:13.620369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.620382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.620458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.620469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.620607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.620648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.620770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.620803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.620998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.621031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.621230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.621262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.621451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.621487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.621617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.621649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.196 [2024-11-07 10:55:13.621824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.196 [2024-11-07 10:55:13.621857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.196 qpair failed and we were unable to recover it. 00:26:46.197 [2024-11-07 10:55:13.622152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.197 [2024-11-07 10:55:13.622184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.197 qpair failed and we were unable to recover it. 
00:26:46.197 [2024-11-07 10:55:13.622408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.197 [2024-11-07 10:55:13.622420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.197 qpair failed and we were unable to recover it. 00:26:46.197 [2024-11-07 10:55:13.622601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.197 [2024-11-07 10:55:13.622635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.197 qpair failed and we were unable to recover it. 00:26:46.197 [2024-11-07 10:55:13.622846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.197 [2024-11-07 10:55:13.622880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.197 qpair failed and we were unable to recover it. 00:26:46.197 [2024-11-07 10:55:13.623013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.197 [2024-11-07 10:55:13.623046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.197 qpair failed and we were unable to recover it. 00:26:46.197 [2024-11-07 10:55:13.623319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.197 [2024-11-07 10:55:13.623352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.197 qpair failed and we were unable to recover it. 00:26:46.197 [2024-11-07 10:55:13.623490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.197 [2024-11-07 10:55:13.623502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.197 qpair failed and we were unable to recover it. 00:26:46.197 [2024-11-07 10:55:13.623661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.197 [2024-11-07 10:55:13.623672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.197 qpair failed and we were unable to recover it. 00:26:46.197 [2024-11-07 10:55:13.623830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.197 [2024-11-07 10:55:13.623857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.197 qpair failed and we were unable to recover it. 00:26:46.197 [2024-11-07 10:55:13.623933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.197 [2024-11-07 10:55:13.623943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.197 qpair failed and we were unable to recover it. 00:26:46.197 [2024-11-07 10:55:13.624014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.197 [2024-11-07 10:55:13.624026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.197 qpair failed and we were unable to recover it. 
00:26:46.203 [2024-11-07 10:55:13.660912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.203 [2024-11-07 10:55:13.660923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.203 qpair failed and we were unable to recover it. 00:26:46.203 [2024-11-07 10:55:13.661017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.203 [2024-11-07 10:55:13.661029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.203 qpair failed and we were unable to recover it. 00:26:46.203 [2024-11-07 10:55:13.661249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.203 [2024-11-07 10:55:13.661281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.203 qpair failed and we were unable to recover it. 00:26:46.203 [2024-11-07 10:55:13.661469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.203 [2024-11-07 10:55:13.661504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.203 qpair failed and we were unable to recover it. 00:26:46.203 [2024-11-07 10:55:13.661620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.203 [2024-11-07 10:55:13.661652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.203 qpair failed and we were unable to recover it. 00:26:46.203 [2024-11-07 10:55:13.661848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.203 [2024-11-07 10:55:13.661880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.203 qpair failed and we were unable to recover it. 00:26:46.203 [2024-11-07 10:55:13.662060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.203 [2024-11-07 10:55:13.662092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.203 qpair failed and we were unable to recover it. 00:26:46.203 [2024-11-07 10:55:13.662300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.203 [2024-11-07 10:55:13.662332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.203 qpair failed and we were unable to recover it. 00:26:46.203 [2024-11-07 10:55:13.662464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.203 [2024-11-07 10:55:13.662476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.203 qpair failed and we were unable to recover it. 00:26:46.203 [2024-11-07 10:55:13.662562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.203 [2024-11-07 10:55:13.662573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.203 qpair failed and we were unable to recover it. 
00:26:46.203 [2024-11-07 10:55:13.662770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.203 [2024-11-07 10:55:13.662782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.662928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.662939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.663071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.663083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.663326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.663338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.663494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.663506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.663667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.663700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.663873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.663905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.664026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.664059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.664262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.664274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.664412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.664425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 
00:26:46.204 [2024-11-07 10:55:13.664516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.664528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.664737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.664749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.664840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.664855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.664928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.664939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.665141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.665173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.665362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.665397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.665538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.665572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.665719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.665753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.665891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.665924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.666050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.666082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 
00:26:46.204 [2024-11-07 10:55:13.666209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.666241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.666356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.666388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.666535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.666568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.666694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.666728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.666861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.666893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.667077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.667111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.667347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.667360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.667493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.667505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.667639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.667651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.667799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.667811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 
00:26:46.204 [2024-11-07 10:55:13.667986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.668018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.668204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.668236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.668369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.668402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.668583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.204 [2024-11-07 10:55:13.668595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.204 qpair failed and we were unable to recover it. 00:26:46.204 [2024-11-07 10:55:13.668677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.668689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.668889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.668901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.669055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.669089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.669198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.669231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.669352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.669383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.669560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.669597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 
00:26:46.205 [2024-11-07 10:55:13.669696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.669714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.669871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.669888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.670054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.670088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.670302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.670337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.670526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.670567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.670755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.670769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.670865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.670899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.671005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.671037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.671155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.671186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.671363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.671397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 
00:26:46.205 [2024-11-07 10:55:13.671518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.671530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.671734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.671767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.671890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.671927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.672042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.672075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.672214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.672246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.672438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.672450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.672623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.672655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.672766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.672798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.673046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.673078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.673257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.673269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 
00:26:46.205 [2024-11-07 10:55:13.673340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.673350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.673509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.673542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.673723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.673755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.673901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.205 [2024-11-07 10:55:13.673935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.205 qpair failed and we were unable to recover it. 00:26:46.205 [2024-11-07 10:55:13.674064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.674097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.674218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.674251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.674388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.674421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.674638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.674651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.674734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.674744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.674886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.674898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 
00:26:46.206 [2024-11-07 10:55:13.675002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.675014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.675101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.675113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.675287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.675321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.675444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.675479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.675661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.675692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.675890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.675923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.676129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.676162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.676337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.676381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.676533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.676545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.676681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.676723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 
00:26:46.206 [2024-11-07 10:55:13.676944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.676976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.677163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.677195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.677392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.677425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.677578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.677604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.677681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.677692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.677774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.677785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.677940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.677973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.678265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.678298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.678412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.678458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.678637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.678668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 
00:26:46.206 [2024-11-07 10:55:13.678795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.678830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.678963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.678994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.679189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.679201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.679409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.679422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.679637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.679670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.206 qpair failed and we were unable to recover it. 00:26:46.206 [2024-11-07 10:55:13.679858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.206 [2024-11-07 10:55:13.679891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.680012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.680044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.680221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.680232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.680312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.680322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.680399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.680409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 
00:26:46.207 [2024-11-07 10:55:13.680494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.680506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.680586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.680598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.680683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.680694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.680865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.680877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.681013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.681025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.681179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.681212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.681343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.681378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.681517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.681550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.681732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.681764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.681888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.681919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 
00:26:46.207 [2024-11-07 10:55:13.682029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.682062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.682270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.682302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.682461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.682502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.682751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.682784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.682916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.682947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.683109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.683142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.683264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.683296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.683416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.683457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.683575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.683587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.683795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.683833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 
00:26:46.207 [2024-11-07 10:55:13.683977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.684009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.684201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.684235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.684443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.684456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.684563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.207 [2024-11-07 10:55:13.684576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.207 qpair failed and we were unable to recover it. 00:26:46.207 [2024-11-07 10:55:13.684732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.684745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.684942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.684954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.685159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.685191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.685317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.685350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.685550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.685584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.685730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.685762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 
00:26:46.208 [2024-11-07 10:55:13.685885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.685919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.686117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.686149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.686410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.686452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.686686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.686719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.686904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.686936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.687176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.687209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.687408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.687449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.687621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.687633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.687767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.687779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.687837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.687848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 
00:26:46.208 [2024-11-07 10:55:13.687982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.687994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.688219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.688251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.688456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.688489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.688611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.688651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.688789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.688801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.689003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.689036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.689222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.689257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.689406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.689457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.689545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.689557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.689722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.689734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 
00:26:46.208 [2024-11-07 10:55:13.689875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.689886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.689998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.690027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.690155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.690188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.690372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.690404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.690552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.690587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.690764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.690775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.690867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.208 [2024-11-07 10:55:13.690899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.208 qpair failed and we were unable to recover it. 00:26:46.208 [2024-11-07 10:55:13.691090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.691124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.691340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.691374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.691496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.691510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 
00:26:46.209 [2024-11-07 10:55:13.691687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.691699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.691923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.691936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.692013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.692024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.692111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.692122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.692184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.692224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.692428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.692473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.692674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.692706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.692901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.692934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.693047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.693078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.693277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.693309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 
00:26:46.209 [2024-11-07 10:55:13.693487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.693522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.693708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.693741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.694009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.694041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.694231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.694264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.694446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.694482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.694741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.694774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.695043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.695075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.695205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.695237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.695454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.695487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.695665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.695697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 
00:26:46.209 [2024-11-07 10:55:13.695825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.695857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.696103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.696134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.696330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.696363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.696633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.696667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.696809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.696841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.697086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.697119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.697316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.697348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.697615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.697656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.209 [2024-11-07 10:55:13.697792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.209 [2024-11-07 10:55:13.697803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.209 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.697878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.697890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 
00:26:46.210 [2024-11-07 10:55:13.698037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.698049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.698133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.698144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.698275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.698288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.698381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.698392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.698465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.698476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.698554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.698565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.698630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.698641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.698709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.698719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.698812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.698823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.698965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.699007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 
00:26:46.210 [2024-11-07 10:55:13.699143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.699177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.699298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.699330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.699457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.699490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.699605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.699617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.699715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.699725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.699798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.699808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.700008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.700041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.700144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.700155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.700304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.700315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.700467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.700480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 
00:26:46.210 [2024-11-07 10:55:13.700618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.700650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.700754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.700787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.700920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.700951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.701244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.701277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.701402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.701443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.701622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.701656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.701764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.701796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.702000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.702032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.702228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.702262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.702386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.702418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 
00:26:46.210 [2024-11-07 10:55:13.702627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.702660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.702776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.210 [2024-11-07 10:55:13.702809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.210 qpair failed and we were unable to recover it. 00:26:46.210 [2024-11-07 10:55:13.703052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.703083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.703285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.703318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.703427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.703476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.703629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.703651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.703729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.703762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.703881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.703913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.704086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.704119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.704367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.704399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 
00:26:46.211 [2024-11-07 10:55:13.704547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.704580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.704735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.704748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.704843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.704854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.704978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.705010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.705188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.705220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.705464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.705499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.705627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.705659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.705925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.705958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.706099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.706132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.706307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.706346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 
00:26:46.211 [2024-11-07 10:55:13.706536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.706548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.706618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.706645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.706794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.706827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.706960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.706992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.707119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.707151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.707334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.707347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.707429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.707443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.707511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.707541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.707724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.707757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.707897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.707929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 
00:26:46.211 [2024-11-07 10:55:13.708040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.708072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.708344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.708377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.708499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.708531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.211 [2024-11-07 10:55:13.708649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.211 [2024-11-07 10:55:13.708683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.211 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.708883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.708917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.709137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.709148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.709256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.709268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.709474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.709507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.709752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.709785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.709972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.710003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 
00:26:46.212 [2024-11-07 10:55:13.710131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.710162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.710274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.710306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.710500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.710533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.710806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.710838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.710970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.711003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.711121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.711152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.711372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.711405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.711551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.711584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.711772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.711804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.712050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.712089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 
00:26:46.212 [2024-11-07 10:55:13.712300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.712313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.712452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.712464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.712664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.712675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.712822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.712834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.712980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.712992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.713069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.713080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.713259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.713291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.713555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.713568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.713781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.713793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.212 [2024-11-07 10:55:13.713880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.713893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 
00:26:46.212 [2024-11-07 10:55:13.713973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.212 [2024-11-07 10:55:13.713984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.212 qpair failed and we were unable to recover it. 00:26:46.213 [2024-11-07 10:55:13.714078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.213 [2024-11-07 10:55:13.714110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.213 qpair failed and we were unable to recover it. 00:26:46.213 [2024-11-07 10:55:13.714406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.213 [2024-11-07 10:55:13.714448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.213 qpair failed and we were unable to recover it. 00:26:46.213 [2024-11-07 10:55:13.714571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.213 [2024-11-07 10:55:13.714603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.213 qpair failed and we were unable to recover it. 00:26:46.213 [2024-11-07 10:55:13.714737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.213 [2024-11-07 10:55:13.714749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.213 qpair failed and we were unable to recover it. 00:26:46.213 [2024-11-07 10:55:13.714881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.213 [2024-11-07 10:55:13.714892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.213 qpair failed and we were unable to recover it. 00:26:46.213 [2024-11-07 10:55:13.714964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.213 [2024-11-07 10:55:13.714974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.213 qpair failed and we were unable to recover it. 00:26:46.213 [2024-11-07 10:55:13.715201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.213 [2024-11-07 10:55:13.715213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.213 qpair failed and we were unable to recover it. 00:26:46.213 [2024-11-07 10:55:13.715285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.213 [2024-11-07 10:55:13.715316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.213 qpair failed and we were unable to recover it. 00:26:46.213 [2024-11-07 10:55:13.715510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.213 [2024-11-07 10:55:13.715544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.213 qpair failed and we were unable to recover it. 
00:26:46.213 [2024-11-07 10:55:13.715672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.213 [2024-11-07 10:55:13.715705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:46.213 qpair failed and we were unable to recover it.
00:26:46.213 [2024-11-07 10:55:13.715821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.213 [2024-11-07 10:55:13.715853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:46.213 qpair failed and we were unable to recover it.
[... the same three-line record (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent connection attempt through 10:55:13.753 ...]
00:26:46.219 [2024-11-07 10:55:13.753797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.219 [2024-11-07 10:55:13.753830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:46.219 qpair failed and we were unable to recover it.
00:26:46.219 [2024-11-07 10:55:13.754020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.754051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.754155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.754187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.754317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.754329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.754498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.754511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.754687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.754699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.754948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.754960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.755202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.755274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.755475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.755549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.755786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.755803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.755898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.755914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 
00:26:46.219 [2024-11-07 10:55:13.756098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.756114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.756267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.756283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.756369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.756412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.756627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.756661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.756797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.756831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.756973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.757006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.757251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.757285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.757403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.757443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.757581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.757597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 00:26:46.219 [2024-11-07 10:55:13.757778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.219 [2024-11-07 10:55:13.757820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.219 qpair failed and we were unable to recover it. 
00:26:46.219 [2024-11-07 10:55:13.758034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.219 [2024-11-07 10:55:13.758067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420
00:26:46.219 qpair failed and we were unable to recover it.
00:26:46.219 [2024-11-07 10:55:13.758206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.219 [2024-11-07 10:55:13.758242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:46.219 qpair failed and we were unable to recover it.
00:26:46.219 [2024-11-07 10:55:13.758375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.219 [2024-11-07 10:55:13.758417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420
00:26:46.219 qpair failed and we were unable to recover it.
00:26:46.219 [2024-11-07 10:55:13.758654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.219 [2024-11-07 10:55:13.758726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420
00:26:46.219 qpair failed and we were unable to recover it.
00:26:46.220 [2024-11-07 10:55:13.760210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.220 [2024-11-07 10:55:13.760223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:46.220 qpair failed and we were unable to recover it.
00:26:46.220 [2024-11-07 10:55:13.761272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.220 [2024-11-07 10:55:13.761283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:46.220 qpair failed and we were unable to recover it.
00:26:46.224 [2024-11-07 10:55:13.784993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.224 [2024-11-07 10:55:13.785004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:46.224 qpair failed and we were unable to recover it.
00:26:46.224 [2024-11-07 10:55:13.785131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.785142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.785301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.785332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.785506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.785518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.785696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.785727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.785913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.785945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.786212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.786245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.786443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.786476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.786676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.786689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.786853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.786886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.787086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.787118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 
00:26:46.224 [2024-11-07 10:55:13.787313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.787346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.787446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.787458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.787530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.787541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.787656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.787688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.787813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.787846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.788033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.788065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.788188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.788220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.788468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.788502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.788695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.788707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.788787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.788798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 
00:26:46.224 [2024-11-07 10:55:13.788954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.788967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.789034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.789044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.789116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.789127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.789299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.789338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.789454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.789488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.789623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.789660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.789821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.789856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.790089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.790120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.790296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.790328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.790442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.790472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 
00:26:46.224 [2024-11-07 10:55:13.790620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.790632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.790858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.790890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.791158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.791190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.791321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.791354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.791471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.791506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.791682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.791714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.791904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.224 [2024-11-07 10:55:13.791943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.224 qpair failed and we were unable to recover it. 00:26:46.224 [2024-11-07 10:55:13.792119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.792151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.792276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.792308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.792502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.792535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 
00:26:46.225 [2024-11-07 10:55:13.792745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.792757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.792823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.792834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.792953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.792986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.793117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.793149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.793270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.793302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.793504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.793537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.793682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.793714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.793833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.793867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.794091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.794123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.794257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.794290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 
00:26:46.225 [2024-11-07 10:55:13.794486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.794520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.794774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.794806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.794934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.794968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.795163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.795196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.795395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.795428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.795632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.795665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.795854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.795886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.796061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.796073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.796245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.796278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.796402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.796446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 
00:26:46.225 [2024-11-07 10:55:13.796662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.796695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.796953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.796965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.797112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.797125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.797307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.797344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.797533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.797613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.797773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.797809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.797951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.797984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.798106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.798140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.798447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.798482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.798681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.798714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 
00:26:46.225 [2024-11-07 10:55:13.798985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.799016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.799198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.799230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.799353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.799386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.799588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.799621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.799793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.799827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.225 qpair failed and we were unable to recover it. 00:26:46.225 [2024-11-07 10:55:13.799960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.225 [2024-11-07 10:55:13.799993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.800259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.800301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.800468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.800502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.800646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.800679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.800811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.800843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 
00:26:46.226 [2024-11-07 10:55:13.800988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.801021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.801151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.801167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.801327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.801343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.801503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.801520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.804690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.804767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.804948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.804988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.805185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.805218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.805401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.805447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.805700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.805735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.805915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.805927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 
00:26:46.226 [2024-11-07 10:55:13.806038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.806071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.806262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.806294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.806412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.806452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.806726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.806759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.806978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.806990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.807122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.807134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.807284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.807296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.807372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.807382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.807468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.807479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.807581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.807593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 
00:26:46.226 [2024-11-07 10:55:13.807693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.807724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.807903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.807935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.808058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.808092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.808224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.226 [2024-11-07 10:55:13.808259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.226 qpair failed and we were unable to recover it. 00:26:46.226 [2024-11-07 10:55:13.808457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.808490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.808738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.808770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.808880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.808891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.809126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.809158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.809402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.809444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.809717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.809750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 
00:26:46.227 [2024-11-07 10:55:13.809997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.810029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.810159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.810192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.810376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.810409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.810551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.810584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.810801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.810848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.811075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.811094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.811257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.811311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.811444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.811478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.811673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.811709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.811845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.811878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 
00:26:46.227 [2024-11-07 10:55:13.812052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.812075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.812176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.812187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.812327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.812339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.812478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.812490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.812634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.812646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.812789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.812809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.812881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.812892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.812976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.813007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.813143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.813175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 00:26:46.227 [2024-11-07 10:55:13.813422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.813471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.227 qpair failed and we were unable to recover it. 
00:26:46.227 [2024-11-07 10:55:13.813678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.227 [2024-11-07 10:55:13.813711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.228 qpair failed and we were unable to recover it. 00:26:46.228 [2024-11-07 10:55:13.813883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.228 [2024-11-07 10:55:13.813896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.228 qpair failed and we were unable to recover it. 00:26:46.228 [2024-11-07 10:55:13.814007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.228 [2024-11-07 10:55:13.814019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.228 qpair failed and we were unable to recover it. 00:26:46.228 [2024-11-07 10:55:13.814083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.228 [2024-11-07 10:55:13.814094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.228 qpair failed and we were unable to recover it. 00:26:46.228 [2024-11-07 10:55:13.814154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.228 [2024-11-07 10:55:13.814165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.228 qpair failed and we were unable to recover it. 00:26:46.228 [2024-11-07 10:55:13.814305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.228 [2024-11-07 10:55:13.814315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.228 qpair failed and we were unable to recover it. 00:26:46.228 [2024-11-07 10:55:13.814463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.228 [2024-11-07 10:55:13.814482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.228 qpair failed and we were unable to recover it. 00:26:46.514 [2024-11-07 10:55:13.814625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.514 [2024-11-07 10:55:13.814645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.514 qpair failed and we were unable to recover it. 00:26:46.514 [2024-11-07 10:55:13.814759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.514 [2024-11-07 10:55:13.814794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.514 qpair failed and we were unable to recover it. 00:26:46.514 [2024-11-07 10:55:13.814984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.514 [2024-11-07 10:55:13.815015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.514 qpair failed and we were unable to recover it. 
00:26:46.514 [2024-11-07 10:55:13.815146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.815181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.815399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.815417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.815522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.815537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.815691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.815706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.815817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.815832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.815986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.816002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.816158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.816173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.816249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.816264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.816424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.816443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.816593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.816609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 
00:26:46.515 [2024-11-07 10:55:13.816686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.816700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.816869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.816886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.817099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.817114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.817206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.817220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.817397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.817429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.817557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.817591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.817729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.817767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.818011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.818027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.818113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.818128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.818223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.818237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 
00:26:46.515 [2024-11-07 10:55:13.818321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.818336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.818416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.818442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.818583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.818597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.818735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.818747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.818845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.818879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.819128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.819162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.819288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.819321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.819449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.819483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.819674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.819706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.819901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.819935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 
00:26:46.515 [2024-11-07 10:55:13.820124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.515 [2024-11-07 10:55:13.820157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.515 qpair failed and we were unable to recover it. 00:26:46.515 [2024-11-07 10:55:13.820325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.820358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.820656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.820691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.820823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.820856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.821071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.821103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.821277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.821309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.821426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.821474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.821745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.821777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.821913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.821946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.822119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.822152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 
00:26:46.516 [2024-11-07 10:55:13.822373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.822407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.822600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.822612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.822690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.822702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.822937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.822996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.823186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.823222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.823421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.823468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.823669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.823685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.823798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.823831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.824025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.824055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.824235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.824266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 
00:26:46.516 [2024-11-07 10:55:13.824395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.824427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.824619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.824652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.824784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.824815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.824927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.824959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.825206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.825238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.825482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.825531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.825715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.825732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.825894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.825928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.516 qpair failed and we were unable to recover it. 00:26:46.516 [2024-11-07 10:55:13.826066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.516 [2024-11-07 10:55:13.826098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.826294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.826327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 
00:26:46.517 [2024-11-07 10:55:13.826507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.826540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.826664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.826698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.826889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.826922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.827165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.827183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.827335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.827351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.827558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.827574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.827815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.827848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.827959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.827991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.828203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.828236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.828447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.828481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 
00:26:46.517 [2024-11-07 10:55:13.828688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.828704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.828855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.828885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.829064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.829098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.829236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.829268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.829398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.829431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.829560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.829591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.829780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.829814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.831016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.831044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.831231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.831250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.831490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.831507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 
00:26:46.517 [2024-11-07 10:55:13.831614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.831631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.831783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.831798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.831960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.831975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.832132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.832151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.832242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.832283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.832482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.832517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.832765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.832797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.833041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.833057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.517 [2024-11-07 10:55:13.833210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.517 [2024-11-07 10:55:13.833227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.517 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.833377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.833410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 
00:26:46.518 [2024-11-07 10:55:13.833607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.833640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.833831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.833864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.833979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.833995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.834156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.834173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.834264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.834278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.834382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.834398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.834617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.834651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.834851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.834883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.835002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.835034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.835221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.835254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 
00:26:46.518 [2024-11-07 10:55:13.835523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.835560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.835748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.835780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.835904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.835934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.836116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.836150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.836348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.836380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.836525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.836557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.836753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.836785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.836987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.837018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.837207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.837239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.837414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.837456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 
00:26:46.518 [2024-11-07 10:55:13.837710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.837744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.837877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.837910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.838022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.838054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.838175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.838207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.838320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.838353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.838546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.838580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.838775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.838807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.838996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.839028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.839278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.839311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 00:26:46.518 [2024-11-07 10:55:13.839517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.839549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.518 qpair failed and we were unable to recover it. 
00:26:46.518 [2024-11-07 10:55:13.839736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.518 [2024-11-07 10:55:13.839768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.839886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.839900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.840067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.840127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.840309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.840392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.840590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.840660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.840875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.840911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.841104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.841121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.841355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.841389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.841532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.841568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.841842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.841876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 
00:26:46.519 [2024-11-07 10:55:13.841998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.842031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.842157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.842191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.842311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.842344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.842475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.842510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.842689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.842722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.842912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.842929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.843079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.843112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.843240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.843274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.843463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.843496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.843633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.843649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 
00:26:46.519 [2024-11-07 10:55:13.843731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.843747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.843835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.843849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.843988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.844001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.844199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.844212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.844296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.844307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.844373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.844385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.844533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.844544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.844709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.844744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.844956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.844988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.845180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.845212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 
00:26:46.519 [2024-11-07 10:55:13.845414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.845473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.845604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.845637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.845743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.519 [2024-11-07 10:55:13.845774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.519 qpair failed and we were unable to recover it. 00:26:46.519 [2024-11-07 10:55:13.846046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.846058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.846145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.846158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.846246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.846256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.846338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.846349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.846495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.846507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.846603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.846614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.846782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.846793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 
00:26:46.520 [2024-11-07 10:55:13.846873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.846884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.847028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.847069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.847246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.847278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.847395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.847441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.847635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.847669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.847809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.847842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.847968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.848001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.848175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.848208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.848323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.848356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 00:26:46.520 [2024-11-07 10:55:13.848480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.520 [2024-11-07 10:55:13.848522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.520 qpair failed and we were unable to recover it. 
00:26:46.527 [2024-11-07 10:55:13.885558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.885590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.885838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.885870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.886072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.886103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.886295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.886328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.886538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.886570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.886709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.886741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.886924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.886958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.887091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.887124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.887264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.887297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.887478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.887489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 
00:26:46.527 [2024-11-07 10:55:13.887695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.887728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.887914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.887946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.888051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.888085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.888262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.888274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.888375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.888386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.888546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.888559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.888710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.888743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.888956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.888988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.889098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.889131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.889238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.889272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 
00:26:46.527 [2024-11-07 10:55:13.889509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.889580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.889749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.889820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.890051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.890088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.890252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.890265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.890335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.890346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.890506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.890519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.890694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.527 [2024-11-07 10:55:13.890728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.527 qpair failed and we were unable to recover it. 00:26:46.527 [2024-11-07 10:55:13.890854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.890886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.891005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.891038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.891173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.891207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 
00:26:46.528 [2024-11-07 10:55:13.891321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.891354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.891484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.891517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.891713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.891745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.891886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.891925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.892051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.892083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.892204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.892217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.892349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.892361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.892558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.892591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.892711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.892743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.892868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.892901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 
00:26:46.528 [2024-11-07 10:55:13.893113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.893145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.893277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.893312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.893485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.893518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.893642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.893676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.893805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.893838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.893959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.893993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.894239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.894272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.894390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.894422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.894556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.894589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.894720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.894752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 
00:26:46.528 [2024-11-07 10:55:13.894946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.894958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.895112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.895145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.895407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.895448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.895638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.895670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.895919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.895953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.896086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.896098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.896192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.528 [2024-11-07 10:55:13.896203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.528 qpair failed and we were unable to recover it. 00:26:46.528 [2024-11-07 10:55:13.896352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.896365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.896438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.896449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.896673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.896705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 
00:26:46.529 [2024-11-07 10:55:13.896833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.896874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.897009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.897041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.897142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.897157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.897313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.897329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.897471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.897486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.897641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.897657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.897800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.897816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.897916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.897931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.898085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.898102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.898279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.898295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 
00:26:46.529 [2024-11-07 10:55:13.898389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.898403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.898503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.898545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.898657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.898689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.898938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.898981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.899187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.899202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.899428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.899472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.899616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.899648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.899832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.899875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.899969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.899989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.900094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.900106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 
00:26:46.529 [2024-11-07 10:55:13.900201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.900213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.900298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.900309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.900455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.900468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.900679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.900711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.900828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.900862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.900981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.901013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.901194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.901227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.529 qpair failed and we were unable to recover it. 00:26:46.529 [2024-11-07 10:55:13.901348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.529 [2024-11-07 10:55:13.901381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.901577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.901611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.901889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.901922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 
00:26:46.530 [2024-11-07 10:55:13.902130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.902162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.902415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.902458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.902743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.902776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.902963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.902996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.903199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.903233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.903406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.903451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.903932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.903964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.904218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.904231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.904404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.904416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.904505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.904516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 
00:26:46.530 [2024-11-07 10:55:13.904723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.904735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.904865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.904878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.904955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.904966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.905095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.905107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.905311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.905324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.905387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.905397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.905539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.905552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.905636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.905648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.905861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.905894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.906039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.906073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 
00:26:46.530 [2024-11-07 10:55:13.906268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.906301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.906429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.906471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.906649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.906681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.906813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.906851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.906970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.907002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.907118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.907150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.907321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.907333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.907472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.530 [2024-11-07 10:55:13.907485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.530 qpair failed and we were unable to recover it. 00:26:46.530 [2024-11-07 10:55:13.907565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.907576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.907662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.907674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 
00:26:46.531 [2024-11-07 10:55:13.907882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.907914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.908111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.908142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.908267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.908299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.908506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.908539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.908736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.908768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.908947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.908979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.909154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.909186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.909403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.909468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.909666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.909698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.909813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.909845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 
00:26:46.531 [2024-11-07 10:55:13.910021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.910055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.910259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.910291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.910430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.910475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.910664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.910698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.910841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.910853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.911078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.911112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.911292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.911325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.911507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.911541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.911657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.911690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.911889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.911901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 
00:26:46.531 [2024-11-07 10:55:13.912071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.912108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.912275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.912292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.912453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.912487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.912663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.912695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.912936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.912967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.531 qpair failed and we were unable to recover it. 00:26:46.531 [2024-11-07 10:55:13.913180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.531 [2024-11-07 10:55:13.913211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.913351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.913382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.913517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.913549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.913820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.913854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.914030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.914064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 
00:26:46.532 [2024-11-07 10:55:13.914226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.914242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.914341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.914355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.914510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.914524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.914594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.914604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.914751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.914784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.914914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.914947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.915181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.915222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.915423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.915470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.915582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.915617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.915813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.915846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 
00:26:46.532 [2024-11-07 10:55:13.916166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.916199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.916341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.916373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.916515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.916550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.916713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.916746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.916993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.917026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.917212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.917229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.917300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.917314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.917492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.917510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.917625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.917639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.917740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.917752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 
00:26:46.532 [2024-11-07 10:55:13.917901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.917913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.918116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.918128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.918225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.918236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.918376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.918402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.918596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.918630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.918816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.918848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.919030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.919047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.919209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.919240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.532 [2024-11-07 10:55:13.919371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.532 [2024-11-07 10:55:13.919402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.532 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.919555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.919592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 
00:26:46.533 [2024-11-07 10:55:13.919721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.919761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.919919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.919935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.920021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.920036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.920181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.920217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.920474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.920507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.920687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.920726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.920857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.920869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.921067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.921099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.921352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.921385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.921532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.921566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 
00:26:46.533 [2024-11-07 10:55:13.921755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.921787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.921920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.921952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.922072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.922106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.922350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.922382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.922527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.922561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.922679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.922712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.922903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.922936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.923121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.923133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.923332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.923365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.923608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.923644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 
00:26:46.533 [2024-11-07 10:55:13.923827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.923859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.923970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.924005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.924205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.924238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.924428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.924492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.924686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.924719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.924853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.924894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.925055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.925067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.925246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.925283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.925406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.533 [2024-11-07 10:55:13.925448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.533 qpair failed and we were unable to recover it. 00:26:46.533 [2024-11-07 10:55:13.925584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.925618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 
00:26:46.534 [2024-11-07 10:55:13.925741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.925773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.925914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.925947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.926145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.926160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.926322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.926359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.926577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.926610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.926826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.926859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.926983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.927016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.927195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.927226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.927402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.927443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.927647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.927681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 
00:26:46.534 [2024-11-07 10:55:13.927902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.927934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.928139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.928152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.928255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.928268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.928559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.928592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.928782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.928815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.928926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.928960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.929155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.929166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.929245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.929256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.929493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.929527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.929741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.929774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 
00:26:46.534 [2024-11-07 10:55:13.929945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.929957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.930108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.930119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.930305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.930338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.930477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.930510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.930708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.930742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.930928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.930961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.931147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.931180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.931305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.534 [2024-11-07 10:55:13.931338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.534 qpair failed and we were unable to recover it. 00:26:46.534 [2024-11-07 10:55:13.931530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.931562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.931685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.931717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 
00:26:46.535 [2024-11-07 10:55:13.931844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.931856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.932015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.932048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.932236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.932269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.932379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.932411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.932538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.932570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.932824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.932858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.933011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.933022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.933107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.933120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.933342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.933375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.933533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.933568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 
00:26:46.535 [2024-11-07 10:55:13.933698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.933739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.933826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.933838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.933921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.933932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.934089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.934121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.934263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.934297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.934421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.934466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.934652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.934685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.934793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.934804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.934970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.934983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.935126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.935159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 
00:26:46.535 [2024-11-07 10:55:13.935373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.935405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.935556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.935589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.935708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.935747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.935834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.935844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.936078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.936111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.936290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.936323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.936474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.936509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.936753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.936786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.936909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.936940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.535 qpair failed and we were unable to recover it. 00:26:46.535 [2024-11-07 10:55:13.937142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.535 [2024-11-07 10:55:13.937180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 
00:26:46.536 [2024-11-07 10:55:13.937335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.937347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.937424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.937438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.937522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.937533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.937684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.937724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.937854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.937886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.938012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.938045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.938233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.938267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.938427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.938493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.938613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.938647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.938844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.938877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 
00:26:46.536 [2024-11-07 10:55:13.938989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.939021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.939201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.939233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.939360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.939393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.939644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.939678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.939802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.939815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.939947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.939959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.940120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.940154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.940268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.940307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.940549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.940583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.940702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.940735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 
00:26:46.536 [2024-11-07 10:55:13.940847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.940879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.941103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.941136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.941403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.941455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.941641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.941675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.941799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.941811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.941901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.941912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.942142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.942175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.942379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.942412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.942555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.942589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 00:26:46.536 [2024-11-07 10:55:13.942833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.942845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.536 qpair failed and we were unable to recover it. 
00:26:46.536 [2024-11-07 10:55:13.942931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.536 [2024-11-07 10:55:13.942942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.943088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.943101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.943240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.943252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.943484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.943517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.943642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.943674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.943891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.943923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.944055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.944088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.944214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.944246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.944388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.944420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.944614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.944648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 
00:26:46.537 [2024-11-07 10:55:13.944831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.944864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.945047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.945079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.945197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.945230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.945499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.945532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.945780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.945792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.945882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.945892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.946032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.946044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.946131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.946142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.946313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.946325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.946486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.946521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 
00:26:46.537 [2024-11-07 10:55:13.946699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.946731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.946919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.946953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.947156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.947188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.947305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.947337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.947591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.947623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.947759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.947793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.947912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.947945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.537 [2024-11-07 10:55:13.948133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.537 [2024-11-07 10:55:13.948172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.537 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.948360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.948392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.948592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.948626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 
00:26:46.538 [2024-11-07 10:55:13.948777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.948810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.948986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.949018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.949234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.949246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.949313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.949324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.949472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.949486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.949624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.949656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.949841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.949873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.949997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.950029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.950154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.950187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.950382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.950415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 
00:26:46.538 [2024-11-07 10:55:13.950537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.950569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.950690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.950702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.950842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.950854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.950931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.950942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.951080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.951092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.951344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.951376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.951564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.951597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.951883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.951894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.951990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.952023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.952208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.952240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 
00:26:46.538 [2024-11-07 10:55:13.952522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.952555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.952798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.952832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.952964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.952976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.953120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.953132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.953229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.953239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.953445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.953479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.953664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.953696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.953887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.953919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.538 qpair failed and we were unable to recover it. 00:26:46.538 [2024-11-07 10:55:13.954042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.538 [2024-11-07 10:55:13.954055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.954157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.954168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 
00:26:46.539 [2024-11-07 10:55:13.954305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.954317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.954472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.954485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.954563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.954593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.954718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.954749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.954873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.954905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.955029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.955062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.955257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.955268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.955495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.955532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.955717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.955749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.955972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.956004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 
00:26:46.539 [2024-11-07 10:55:13.956114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.956146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.956267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.956300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.956588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.956621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.956799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.956831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.957021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.957053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.957232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.957268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.957466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.957478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.957580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.957611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.957840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.957872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.958085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.958117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 
00:26:46.539 [2024-11-07 10:55:13.958359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.958391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.958540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.958573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.958768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.958800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.958947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.958979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.959156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.959189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.959425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.959441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.959656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.959688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.539 qpair failed and we were unable to recover it. 00:26:46.539 [2024-11-07 10:55:13.959929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.539 [2024-11-07 10:55:13.959961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.960142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.960175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.960426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.960469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 
00:26:46.540 [2024-11-07 10:55:13.960650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.960682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.960865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.960898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.961094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.961126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.961333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.961365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.961652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.961686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.961800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.961832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.961944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.961987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.962144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.962156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.962401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.962445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.962629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.962662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 
00:26:46.540 [2024-11-07 10:55:13.962801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.962833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.963031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.963063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.963271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.963283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.963430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.963471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.963746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.963785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.963944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.963957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.964039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.964050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.964111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.964124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.964297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.964310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.964377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.964387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 
00:26:46.540 [2024-11-07 10:55:13.964472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.964484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.964561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.964571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.964657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.964667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.964798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.964830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.964955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.964985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.965192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.540 [2024-11-07 10:55:13.965225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.540 qpair failed and we were unable to recover it. 00:26:46.540 [2024-11-07 10:55:13.965380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.965392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.965476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.965487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.965623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.965635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.965770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.965783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 
00:26:46.541 [2024-11-07 10:55:13.965856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.965867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.965940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.965951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.966081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.966092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.966187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.966198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.966391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.966423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.966620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.966653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.966791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.966824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.967065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.967097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.967342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.967376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.967559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.967592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 
00:26:46.541 [2024-11-07 10:55:13.967728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.967760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.967958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.967990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.968163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.968195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.968411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.968455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.968701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.968772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.969011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.969047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.969192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.969227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.969363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.969379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.969523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.969539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.969693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.969709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 
00:26:46.541 [2024-11-07 10:55:13.969814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.969828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.969989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.970006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.970165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.970198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.970453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.970489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.970614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.970646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.970834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.970867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.971060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.971097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.971193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.971221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.971364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.541 [2024-11-07 10:55:13.971380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.541 qpair failed and we were unable to recover it. 00:26:46.541 [2024-11-07 10:55:13.971547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.971561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 
00:26:46.542 [2024-11-07 10:55:13.971669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.971703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.971946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.971979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.972176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.972209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.972299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.972310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.972444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.972456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.972613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.972625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.972770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.972782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.972921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.972934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.973015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.973026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.973109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.973121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 
00:26:46.542 [2024-11-07 10:55:13.973278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.973310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.973528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.973562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.973742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.973775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.973884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.973924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.973999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.974009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.974241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.974273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.974414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.974458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.974572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.974604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.974792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.974824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 00:26:46.542 [2024-11-07 10:55:13.974960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.542 [2024-11-07 10:55:13.974993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.542 qpair failed and we were unable to recover it. 
00:26:46.542 [2024-11-07 10:55:13.975155 - 10:55:14.015647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (repeated for every connection attempt in this window)
00:26:46.542 [2024-11-07 10:55:13.975167 - 10:55:14.015661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 / 0x1f1dbe0 / 0x7f8978000b90 / 0x7f896c000b90 with addr=10.0.0.2, port=4420 (the tqpair pointer varies per attempt; the target address and port do not)
00:26:46.549 qpair failed and we were unable to recover it. (reported after each failed attempt)
00:26:46.549 [2024-11-07 10:55:14.015818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.549 [2024-11-07 10:55:14.015851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.549 qpair failed and we were unable to recover it. 00:26:46.549 [2024-11-07 10:55:14.016050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.549 [2024-11-07 10:55:14.016083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.549 qpair failed and we were unable to recover it. 00:26:46.549 [2024-11-07 10:55:14.016292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.549 [2024-11-07 10:55:14.016325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.549 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.016439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.016452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.016696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.016730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.016918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.016953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.017145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.017178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.017291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.017325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.017583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.017617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.017875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.017913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 
00:26:46.550 [2024-11-07 10:55:14.018073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.018091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.018211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.018246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.018453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.018488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.018665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.018699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.018940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.018974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.019154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.019187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.019448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.019484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.019616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.019648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.019779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.019814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.020088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.020103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 
00:26:46.550 [2024-11-07 10:55:14.020243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.020259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.020355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.020370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.020496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.020541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.020744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.020778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.020933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.020977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.021135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.021151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.021236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.021277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.021461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.021497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.021678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.021712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.021950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.021983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 
00:26:46.550 [2024-11-07 10:55:14.022176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.022209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.022353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.550 [2024-11-07 10:55:14.022387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.550 qpair failed and we were unable to recover it. 00:26:46.550 [2024-11-07 10:55:14.022637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.022669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.022918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.022951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.023068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.023103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.023301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.023335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.023579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.023613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.023889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.023922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.024036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.024052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.024190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.024206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 
00:26:46.551 [2024-11-07 10:55:14.024305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.024319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.024411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.024426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.024521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.024536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.024681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.024717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.024830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.024863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.025006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.025040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.025172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.025206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.025481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.025517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.025791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.025824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.026027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.026101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 
00:26:46.551 [2024-11-07 10:55:14.026322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.026360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.026581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.026619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.026833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.026866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.027059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.027076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.027148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.027163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.027369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.027385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.027528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.027545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.027703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.027736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.027913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.027946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.028138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.028171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 
00:26:46.551 [2024-11-07 10:55:14.028368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.028384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.028538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.028556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.028712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.028728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.028851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.028868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.028961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.028975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.029156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.551 [2024-11-07 10:55:14.029173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.551 qpair failed and we were unable to recover it. 00:26:46.551 [2024-11-07 10:55:14.029270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.029314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.029539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.029576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.029704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.029738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.029914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.029947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 
00:26:46.552 [2024-11-07 10:55:14.030198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.030240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.030329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.030344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.030534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.030568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.030756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.030791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.030908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.030941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.031160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.031192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.031495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.031538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.031731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.031765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.031946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.031978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.032101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.032134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 
00:26:46.552 [2024-11-07 10:55:14.032315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.032348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.032519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.032553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.032685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.032719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.032902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.032935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.033122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.033138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.033353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.033386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.033596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.033630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.033820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.033855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.034043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.034058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.034182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.034215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 
00:26:46.552 [2024-11-07 10:55:14.034351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.034385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.034572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.034605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.034778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.034812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.034943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.034977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.035095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.552 [2024-11-07 10:55:14.035132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.552 qpair failed and we were unable to recover it. 00:26:46.552 [2024-11-07 10:55:14.035320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.035337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.035411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.035425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.035692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.035725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.035915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.035947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.036120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.036135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 
00:26:46.553 [2024-11-07 10:55:14.036286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.036327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.036633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.036668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.036802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.036834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.036991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.037032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.037223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.037257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.037504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.037538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.037781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.037815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.037981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.038014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.038161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.038191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.038279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.038293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 
00:26:46.553 [2024-11-07 10:55:14.038374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.038390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.038498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.038513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.038596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.038610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.038775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.038791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.039862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.039892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.040062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.040079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.040224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.040240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.040470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.040487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.040654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.040671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.040895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.040930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 
00:26:46.553 [2024-11-07 10:55:14.041046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.041062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.041200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.041217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.041337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.553 [2024-11-07 10:55:14.041369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.553 qpair failed and we were unable to recover it. 00:26:46.553 [2024-11-07 10:55:14.041566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.041601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.041742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.041776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.041909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.041942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.042137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.042169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.042364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.042399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.042588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.042604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.042708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.042750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 
00:26:46.554 [2024-11-07 10:55:14.042912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.042959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.043287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.043340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.044317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.044346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.044576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.044614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.044759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.044792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.045038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.045071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.045341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.045375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.045581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.045615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.045906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.045939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.046130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.046164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 
00:26:46.554 [2024-11-07 10:55:14.046299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.046333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.046541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.046575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.046711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.046745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.046932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.046965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.047156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.047190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.047392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.047425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.047626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.047642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.047872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.047888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.047979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.554 [2024-11-07 10:55:14.048021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.554 qpair failed and we were unable to recover it. 00:26:46.554 [2024-11-07 10:55:14.048131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.048163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 
00:26:46.555 [2024-11-07 10:55:14.048269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.048302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.048431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.048503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.048726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.048760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.048899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.048931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.049119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.049153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.049296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.049328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.049535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.049552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.049661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.049696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.049848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.049881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.050003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.050036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 
00:26:46.555 [2024-11-07 10:55:14.050223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.050255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.050399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.050416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.050567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.050583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.050800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.050816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.050909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.050940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.051129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.051162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.051304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.051337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.051469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.051485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.051570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.051584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.052591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.052622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 
00:26:46.555 [2024-11-07 10:55:14.052875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.052912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.053057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.053097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.053300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.053334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.053527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.053545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.053696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.053712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.053863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.053880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.053967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.053998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.054202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.054234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.054363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.054397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.054534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.054568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 
00:26:46.555 [2024-11-07 10:55:14.054779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.054795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.555 [2024-11-07 10:55:14.054895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.555 [2024-11-07 10:55:14.054925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.555 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.055133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.055167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.055303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.055335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.055536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.055570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.055849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.055883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.056095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.056110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.056269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.056301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.056511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.056544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.056656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.056686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 
00:26:46.556 [2024-11-07 10:55:14.056800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.056832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.057012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.057046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.057258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.057291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.057534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.057568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.057862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.057894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.058004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.058037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.058180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.058212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.058428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.058474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.058594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.058612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.058705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.058720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 
00:26:46.556 [2024-11-07 10:55:14.058807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.058822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.058965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.058981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.059138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.059154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.059252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.059268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.059359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.059389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.059538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.059577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.059703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.059736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.059876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.059909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.060026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.060059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.060181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.060213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 
00:26:46.556 [2024-11-07 10:55:14.060337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.060368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.060552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.060564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.060727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.060740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.060803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.060814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.556 [2024-11-07 10:55:14.060901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.556 [2024-11-07 10:55:14.060913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.556 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.060999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.061010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.061076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.061087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.061248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.061259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.061338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.061349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.061431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.061447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 
00:26:46.557 [2024-11-07 10:55:14.061613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.061626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.061800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.061833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.061970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.062002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.062132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.062166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.062343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.062378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.062647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.062686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.062817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.062849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.062985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.063019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.063239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.063272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.063413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.063458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 
00:26:46.557 [2024-11-07 10:55:14.063562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.063578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.063739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.063786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.063999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.064032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.064166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.064200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.064405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.064448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.064657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.064691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.064878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.064911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.065091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.065123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.065252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.065285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.065495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.065529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 
00:26:46.557 [2024-11-07 10:55:14.065631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.065643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.065727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.065738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.065840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.065874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.066714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.066736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.066894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.066907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.557 [2024-11-07 10:55:14.067009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.557 [2024-11-07 10:55:14.067022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.557 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.067104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.067114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.067270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.067303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.067504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.067539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.067656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.067688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 
00:26:46.558 [2024-11-07 10:55:14.067881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.067914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.068156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.068187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.068318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.068351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.068593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.068605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.068772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.068785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.068934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.068967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.069158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.069191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.069311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.069344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.069474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.069486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.069575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.069586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 
00:26:46.558 [2024-11-07 10:55:14.069660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.069672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.069771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.069781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.069915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.069927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.070019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.070042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.070152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.070164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.070232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.070245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.070319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.070330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.070424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.070441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.070511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.070523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.070587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.070599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 
00:26:46.558 [2024-11-07 10:55:14.070682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.070693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.070763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.070774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.070878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.070910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.071018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.071052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.071179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.071210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.071321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.558 [2024-11-07 10:55:14.071353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.558 qpair failed and we were unable to recover it. 00:26:46.558 [2024-11-07 10:55:14.071564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.071577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.071685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.071718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.071838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.071872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.071988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.072022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 
00:26:46.559 [2024-11-07 10:55:14.072130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.072163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.072284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.072296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.072382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.072393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.073588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.073608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.073873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.073886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.074147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.074180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.074330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.074363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.074502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.074537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.074658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.074691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.074838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.074870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 
00:26:46.559 [2024-11-07 10:55:14.075115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.075148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.075343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.075376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.075580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.075616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.075796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.075829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.075964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.075996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.076189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.076222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.076370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.076403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.076560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.076635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.076912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.076949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.077151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.077185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 
00:26:46.559 [2024-11-07 10:55:14.077314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.077348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.077467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.077502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.077629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.077645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.077834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.077869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.078000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.078032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.559 [2024-11-07 10:55:14.078216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.559 [2024-11-07 10:55:14.078258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.559 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.078460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.078479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.078556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.078571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.078668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.078683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.078768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.078783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 
00:26:46.560 [2024-11-07 10:55:14.078880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.078895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.079004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.079020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.079099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.079114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.079277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.079295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.079450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.079468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.079596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.079613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.079714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.079730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.079874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.079918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.080065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.080098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.080240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.080273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 
00:26:46.560 [2024-11-07 10:55:14.080407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.080454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.080649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.080681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.080867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.080901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.081037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.081070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.081199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.081231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.081368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.081402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.081589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.081625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.081756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.081790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.082006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.082040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.082150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.082180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 
00:26:46.560 [2024-11-07 10:55:14.082329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.082362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.082612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.082647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.082768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.082803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.082934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.082967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.083158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.083193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.083380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.083413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.083603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.083635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.084605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.084634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.084818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.084835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 00:26:46.560 [2024-11-07 10:55:14.084932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.560 [2024-11-07 10:55:14.084949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.560 qpair failed and we were unable to recover it. 
00:26:46.560 [2024-11-07 10:55:14.085177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.085194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.085277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.085293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.085385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.085429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.085595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.085628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.085752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.085784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.085918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.085951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.086132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.086205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.086393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.086431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.086626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.086639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.086880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.086892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 
00:26:46.561 [2024-11-07 10:55:14.087023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.087034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.087125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.087137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.087297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.087308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.087933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.087955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.088144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.088157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.088223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.088234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.088361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.088372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.088446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.088458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.088615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.088648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.088843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.088884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 
00:26:46.561 [2024-11-07 10:55:14.089018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.089050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.089241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.089274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.089386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.089420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.089711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.089744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.089931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.089943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.090108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.090120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.090219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.090233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.090453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.090489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.090706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.090739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.090919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.090953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 
00:26:46.561 [2024-11-07 10:55:14.091080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.091112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.091243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.091255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.091329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.091340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.091425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.091470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.561 qpair failed and we were unable to recover it. 00:26:46.561 [2024-11-07 10:55:14.091603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.561 [2024-11-07 10:55:14.091635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.091771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.091804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.092075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.092108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.092250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.092282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.092486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.092499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.092566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.092577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 
00:26:46.562 [2024-11-07 10:55:14.092765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.092797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.092926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.092958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.093141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.093153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.093244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.093255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.093335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.093346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.093493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.093527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.093689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.093756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.093966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.094005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.094268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.094301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.094444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.094479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 
00:26:46.562 [2024-11-07 10:55:14.094667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.094684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.094763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.094777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.094905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.094938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.095056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.095089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.095210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.095244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.095432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.095478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.095724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.095758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.095880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.095914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.096166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.096200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.096327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.096361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 
00:26:46.562 [2024-11-07 10:55:14.096555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.096589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.096724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.096757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.096868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.096902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.097129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.097162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.097299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.097332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.097532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.097567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.097710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.097745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.097872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.097904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.098111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.098146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.098288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.098304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 
00:26:46.562 [2024-11-07 10:55:14.098388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.562 [2024-11-07 10:55:14.098403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.562 qpair failed and we were unable to recover it. 00:26:46.562 [2024-11-07 10:55:14.098494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.098510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.098716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.098732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.098895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.098914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.099124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.099141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.099296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.099312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.099412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.099425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.099488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.099499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.099576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.099587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.099667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.099678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 
00:26:46.563 [2024-11-07 10:55:14.099761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.099772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.099852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.099862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.099945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.099958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.100028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.100039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.100130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.100140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.100226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.100237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.100413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.100424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.100519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.100530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.100610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.100620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.100718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.100729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 
00:26:46.563 [2024-11-07 10:55:14.100789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.100799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.100925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.100935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.101011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.101022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.101156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.101167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.101231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.101241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.101322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.101333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.101426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.101443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.101499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.101510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.101603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.101614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.101680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.101691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 
00:26:46.563 [2024-11-07 10:55:14.101767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.101778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.101833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.101844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.101984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.101994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.102124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.102135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.102204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.102215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.102306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.102317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.102399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.102410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.102484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.102495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.102628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.102639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 00:26:46.563 [2024-11-07 10:55:14.102725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.563 [2024-11-07 10:55:14.102735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.563 qpair failed and we were unable to recover it. 
00:26:46.564 [2024-11-07 10:55:14.102823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.102834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.103076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.103087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.103231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.103243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.103388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.103402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.103477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.103489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.103572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.103583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.103744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.103756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.103835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.103846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.103923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.103935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.104070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.104082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 
00:26:46.564 [2024-11-07 10:55:14.104162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.104173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.104314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.104327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.104401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.104412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.104497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.104509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.104667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.104679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.104827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.104839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.104979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.104991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.105085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.105096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.105229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.105241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.105384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.105395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 
00:26:46.564 [2024-11-07 10:55:14.105480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.105491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.105581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.105593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.105741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.105753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.105907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.105918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.105993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.106005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.106164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.106198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.106383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.106414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.106626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.106660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.106747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.106759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.106902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.106915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 
00:26:46.564 [2024-11-07 10:55:14.107051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.107064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.107290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.107302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.107467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.107501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.564 [2024-11-07 10:55:14.107710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.564 [2024-11-07 10:55:14.107741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.564 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.107938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.107970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.108082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.108114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.108224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.108256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.108448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.108476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.108619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.108631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.108719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.108730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 
00:26:46.565 [2024-11-07 10:55:14.108819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.108831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.108973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.108985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.109231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.109242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.109442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.109456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.109535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.109546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.109671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.109683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.109831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.109843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.109994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.110006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.110095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.110107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.110175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.110186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 
00:26:46.565 [2024-11-07 10:55:14.110320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.110331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.110475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.110488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.110639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.110651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.110743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.110755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.110820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.110830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.111075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.111106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.111228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.111261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.111381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.111414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.111664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.111676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.111742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.111752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 
00:26:46.565 [2024-11-07 10:55:14.111847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.111879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.112005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.112038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.112209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.112242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.112421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.112459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.112548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.112560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.112711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.112724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.112805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.112816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.112889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.112899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.113055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.113088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.113336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.113369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 
00:26:46.565 [2024-11-07 10:55:14.113635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.113669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.113786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.113819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.565 qpair failed and we were unable to recover it. 00:26:46.565 [2024-11-07 10:55:14.113928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.565 [2024-11-07 10:55:14.113961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.114086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.114119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.114349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.114380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.114582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.114616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.114825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.114858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.114974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.114985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.115086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.115097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.115159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.115193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 
00:26:46.566 [2024-11-07 10:55:14.115327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.115357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.115486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.115519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.115640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.115676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.115780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.115794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.115881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.115893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.116037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.116070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.116292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.116324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.116528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.116540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.116646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.116658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.116808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.116841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 
00:26:46.566 [2024-11-07 10:55:14.117036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.117069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.117270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.117303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.117564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.117576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.117653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.117666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.117812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.117824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.118027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.118060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.118373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.118406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.118593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.118625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.118768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.118802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.118981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.119013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 
00:26:46.566 [2024-11-07 10:55:14.119199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.119232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.119478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.119513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.119635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.119647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.119800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.119812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.119956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.119988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.120178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.120211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.120491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.120503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.120683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.120716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.120847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.120882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.121143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.121176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 
00:26:46.566 [2024-11-07 10:55:14.121430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.121448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.121602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.121614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.566 qpair failed and we were unable to recover it. 00:26:46.566 [2024-11-07 10:55:14.121701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.566 [2024-11-07 10:55:14.121713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.121801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.121813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.121948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.121960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.122034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.122048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.122181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.122193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.122403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.122447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.122588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.122620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.122813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.122846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 
00:26:46.567 [2024-11-07 10:55:14.123018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.123052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.123331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.123363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.123564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.123600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.123837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.123882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.124013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.124046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.124181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.124214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.124475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.124509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.124645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.124677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.124950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.124961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.125062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.125075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 
00:26:46.567 [2024-11-07 10:55:14.125860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.125884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.126115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.126151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.126415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.126462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.126651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.126684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.126889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.126922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.127062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.127096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.127294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.127327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.127543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.127556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.127635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.127646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.127741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.127752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 
00:26:46.567 [2024-11-07 10:55:14.127918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.127950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.128177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.128210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.128404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.128416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.128567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.128579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.128781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.128794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.128889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.128901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.128993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.129006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.129194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.129206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.129386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.129418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.129560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.129594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 
00:26:46.567 [2024-11-07 10:55:14.129799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.129871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.130104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.130140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.567 qpair failed and we were unable to recover it. 00:26:46.567 [2024-11-07 10:55:14.130341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.567 [2024-11-07 10:55:14.130375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.130535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.130570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.130731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.130762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.130880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.130912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.131119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.131150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.131340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.131372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.131567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.131583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.132601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.132628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 
00:26:46.568 [2024-11-07 10:55:14.132798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.132814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.132976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.132992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.133211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.133226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.133378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.133398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.133574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.133609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.133844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.133876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.134032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.134064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.134259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.134291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.134511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.134544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.134740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.134773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 
00:26:46.568 [2024-11-07 10:55:14.134969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.135002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.135200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.135233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.135410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.135426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.135613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.135647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.135836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.135868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.136079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.136112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.136331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.136363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.136568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.136584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.136739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.136756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.136836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.136851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 
00:26:46.568 [2024-11-07 10:55:14.137034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.137073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.137364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.137397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.137586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.137624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.137754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.137786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.137929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.137961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.138272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.138305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.138515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.138550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.138735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.138767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.138950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.138982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-11-07 10:55:14.139246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.139278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 
00:26:46.568 [2024-11-07 10:55:14.139507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-11-07 10:55:14.139544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.139745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.139781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.139886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.139899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.139985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.139996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.140147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.140180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.140370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.140403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.140600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.140636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.140772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.140788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.141005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.141038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.141239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.141271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 
00:26:46.569 [2024-11-07 10:55:14.141479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.141511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.141709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.141741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.141934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.141967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.142177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.142222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.142465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.142482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.142561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.142576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.142664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.142678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.142837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.142853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.143008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.143024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.143119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.143134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 
00:26:46.569 [2024-11-07 10:55:14.143294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.143310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.143416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.143431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.143520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.143534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.143759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.143776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.143867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.143882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.144144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.144176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.144363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.144395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.144562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-11-07 10:55:14.144599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-11-07 10:55:14.144754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.144766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.144924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.144958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 
00:26:46.570 [2024-11-07 10:55:14.145080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.145114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.145294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.145327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.145508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.145542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.145652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.145664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.145749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.145760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.145914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.145928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.146219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.146251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.146500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.146534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.146650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.146683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.146816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.146847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 
00:26:46.570 [2024-11-07 10:55:14.147199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.147272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.147507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.147552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.147804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.147839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.147986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.148020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.148211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.148245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.148519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.148536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.148745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.148762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.148848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.148863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.149077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.149110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.149376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.149409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 
00:26:46.570 [2024-11-07 10:55:14.149636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.149653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.149862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.149878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.150143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.150159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.150266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.150299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.150552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.150586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.150861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.150894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.151224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.151257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.151562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.151578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.151736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.151769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.152009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.152025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 
00:26:46.570 [2024-11-07 10:55:14.152118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.152133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-11-07 10:55:14.152359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-11-07 10:55:14.152375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.571 [2024-11-07 10:55:14.152557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-11-07 10:55:14.152573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-11-07 10:55:14.152763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-11-07 10:55:14.152779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-11-07 10:55:14.152880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-11-07 10:55:14.152897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-11-07 10:55:14.152994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-11-07 10:55:14.153010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-11-07 10:55:14.153240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-11-07 10:55:14.153255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-11-07 10:55:14.153397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-11-07 10:55:14.153415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-11-07 10:55:14.153544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-11-07 10:55:14.153561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-11-07 10:55:14.153717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-11-07 10:55:14.153749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 
00:26:46.571 [2024-11-07 10:55:14.153889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-11-07 10:55:14.153921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-11-07 10:55:14.154190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-11-07 10:55:14.154222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.154334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.154367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.154558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.154591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.154728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.154764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.154993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.155010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.155166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.155181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.155391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.155407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.155620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.155637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.155724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.155738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 
00:26:46.853 [2024-11-07 10:55:14.155972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.155988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.156261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.156277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.156368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.156382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.156607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.156623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.156711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.156726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.156900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.156916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.157022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.157039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.157204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.157219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.157297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.157312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.157417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.157432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 
00:26:46.853 [2024-11-07 10:55:14.157623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.157657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.157837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.157870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.158028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.158060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.158304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.158338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.158458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.158487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.853 [2024-11-07 10:55:14.158597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.853 [2024-11-07 10:55:14.158610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.853 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.158810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.158822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.159110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.159121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.159283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.159296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.159487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.159521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 
00:26:46.854 [2024-11-07 10:55:14.159744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.159778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.159925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.159958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.160171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.160204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.160465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.160478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.160578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.160589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.160749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.160761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.160844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.160855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.160991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.161003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.161169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.161180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.161368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.161400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 
00:26:46.854 [2024-11-07 10:55:14.161624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.161658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.161861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.161893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.162040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.162074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.162330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.162363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.162555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.162591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.162789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.162823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.163032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.163065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.163187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.163220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.163489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.163524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.163673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.163705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 
00:26:46.854 [2024-11-07 10:55:14.163951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.163982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.164220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.164256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.164467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.164501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.164650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.164666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.164758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.164773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.164857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.164871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.165019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.165054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.165235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.165267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.165539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.165573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 00:26:46.854 [2024-11-07 10:55:14.165692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.854 [2024-11-07 10:55:14.165708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.854 qpair failed and we were unable to recover it. 
00:26:46.855 [2024-11-07 10:55:14.165903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.165936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.166085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.166119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.166342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.166376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.166593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.166609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.166722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.166756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.166907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.166940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.167133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.167166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.167343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.167375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.167594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.167627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.167770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.167802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 
00:26:46.855 [2024-11-07 10:55:14.167954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.167987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.168245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.168278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.168487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.168520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.168662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.168694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.168912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.168946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.169074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.169113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.169281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.169297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.169377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.169393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.169541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.169556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.169708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.169742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 
00:26:46.855 [2024-11-07 10:55:14.169988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.170021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.170228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.170262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.170443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.170456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.170614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.170646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.170841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.170874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.171146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.171179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.171367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.171401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.171625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.171666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.171756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.171771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.171888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.171902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 
00:26:46.855 [2024-11-07 10:55:14.172009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.172026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.172258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.172291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.172453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.172489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.172690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.855 [2024-11-07 10:55:14.172734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.855 qpair failed and we were unable to recover it. 00:26:46.855 [2024-11-07 10:55:14.172910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.172927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.173176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.173210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.173459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.173493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.173635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.173652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.173751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.173765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.173909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.173925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 
00:26:46.856 [2024-11-07 10:55:14.174145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.174178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.174459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.174492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.174688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.174721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.174867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.174900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.175051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.175084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.175325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.175361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.175578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.175611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.175823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.175856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.175983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.176016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.176151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.176182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 
00:26:46.856 [2024-11-07 10:55:14.176374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.176408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.176567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.176600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.176884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.176917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.177054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.177088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.177304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.177336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.177542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.177576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.177782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.177814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.177932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.177964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.178143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.178184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 00:26:46.856 [2024-11-07 10:55:14.178398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.856 [2024-11-07 10:55:14.178431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.856 qpair failed and we were unable to recover it. 
00:26:46.857 [2024-11-07 10:55:14.178696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.178729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.179021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.179055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.179336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.179369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.179482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.179493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.179733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.179766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.180016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.180049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.180295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.180328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.180575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.180610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.180741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.180754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.180846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.180856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 
00:26:46.857 [2024-11-07 10:55:14.180953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.180964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.181163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.181194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.181379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.181413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.181631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.181664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.181841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.181852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.182017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.182051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.182261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.182294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.182450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.182485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.182777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.182788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.182943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.182955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 
00:26:46.857 [2024-11-07 10:55:14.183205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.183217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.183369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.183401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.183617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.183650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.183896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.183928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.184161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.184194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.184473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.184508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.184641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.184674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.857 [2024-11-07 10:55:14.184850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.857 [2024-11-07 10:55:14.184883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.857 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.185017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.185049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.185278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.185310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 
00:26:46.858 [2024-11-07 10:55:14.185454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.185490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.185691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.185725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.185961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.185993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.186108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.186141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.186411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.186453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.186710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.186743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.186873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.186904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.187042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.187075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.187362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.187395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.187676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.187752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 
00:26:46.858 [2024-11-07 10:55:14.187935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.187973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.188254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.188288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.188480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.188498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.188607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.188640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.188776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.188810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.189018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.189051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.189355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.189388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.189596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.189632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.189750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.189765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.189942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.189988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 
00:26:46.858 [2024-11-07 10:55:14.190234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.190269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.858 [2024-11-07 10:55:14.190524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.858 [2024-11-07 10:55:14.190558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.858 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.190887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.190926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.191170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.191188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.191422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.191445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.191612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.191628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.191804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.191839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.192079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.192112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.192391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.192424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.192653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.192686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 
00:26:46.859 [2024-11-07 10:55:14.192936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.192969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.193110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.193143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.193393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.193427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.193588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.193622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.193845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.193877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.194024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.194067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.194252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.194269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.194364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.194411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.194553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.194588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.194839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.194873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 
00:26:46.859 [2024-11-07 10:55:14.195099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.195133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.195319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.195352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.195631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.859 [2024-11-07 10:55:14.195666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.859 qpair failed and we were unable to recover it. 00:26:46.859 [2024-11-07 10:55:14.195820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.195854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.196156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.196189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.196334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.196367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.196649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.196685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.196833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.196867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.197081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.197114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.197320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.197355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 
00:26:46.860 [2024-11-07 10:55:14.197547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.197581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.197760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.197793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.197949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.197982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.198239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.198273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.198541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.198576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.198791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.198824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.199025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.199058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.199201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.199235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.199363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.199398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.199619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.199653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 
00:26:46.860 [2024-11-07 10:55:14.199788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.199822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.200096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.200130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.200315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.200349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.200601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.200636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.200831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.200863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.201023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.201056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.201371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.201403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.201620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.860 [2024-11-07 10:55:14.201654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.860 qpair failed and we were unable to recover it. 00:26:46.860 [2024-11-07 10:55:14.201814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.201848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.202079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.202113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 
00:26:46.861 [2024-11-07 10:55:14.202383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.202418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.202703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.202737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.202874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.202907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.203200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.203233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.203431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.203473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.203705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.203744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.203865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.203881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.203983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.203998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.204140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.204157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.204339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.204372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 
00:26:46.861 [2024-11-07 10:55:14.204597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.204632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.204826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.204859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.205075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.205108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.205289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.205322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.205540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.205557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.205772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.205805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.206000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.206034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.206263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.206296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.206522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.206557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.206692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.206725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 
00:26:46.861 [2024-11-07 10:55:14.206936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.206952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.207127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.207160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.861 qpair failed and we were unable to recover it. 00:26:46.861 [2024-11-07 10:55:14.207366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.861 [2024-11-07 10:55:14.207399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.207529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.207546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.207739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.207755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.207908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.207942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.208196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.208230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.208452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.208487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.208705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.208737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.208926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.208942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 
00:26:46.862 [2024-11-07 10:55:14.209074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.209108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.209321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.209353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.209607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.209683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.209956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.209971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.210077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.210088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.210249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.210261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.210492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.210527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.210730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.210763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.210957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.210969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.211172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.211183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 
00:26:46.862 [2024-11-07 10:55:14.211357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.211392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.211597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.211632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.211942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.211974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.212188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.212221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.212422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.212466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.212653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.212668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.212778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.862 [2024-11-07 10:55:14.212809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.862 qpair failed and we were unable to recover it. 00:26:46.862 [2024-11-07 10:55:14.212939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.212972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.213253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.213285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.213426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.213470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 
00:26:46.863 [2024-11-07 10:55:14.213622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.213664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.213762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.213773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.213853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.213864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.214008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.214020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.214167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.214180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.214352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.214364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.214575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.214587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.214738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.214748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.214851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.214862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.215121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.215132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 
00:26:46.863 [2024-11-07 10:55:14.215276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.215290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.215397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.215411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.215612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.215626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.215725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.215737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.217001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.217027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.217348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.217362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.217520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.217535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.217716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.217730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.217832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.217846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.217925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.217936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 
00:26:46.863 [2024-11-07 10:55:14.218096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.218109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.218286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.218298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.218390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.218402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.218488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.218500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.218606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.863 [2024-11-07 10:55:14.218616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.863 qpair failed and we were unable to recover it. 00:26:46.863 [2024-11-07 10:55:14.218719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.218736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.218843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.218856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.218954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.218966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.219790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.219817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.219993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.220007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 
00:26:46.864 [2024-11-07 10:55:14.220087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.220129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.220338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.220371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.220605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.220638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.220768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.220780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.220884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.220897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.221042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.221058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.221264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.221277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.221358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.221370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.221531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.221543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.221705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.221719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 
00:26:46.864 [2024-11-07 10:55:14.221824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.221836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.222010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.222044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.222341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.222375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.222591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.222625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.222769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.222803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.222967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.222979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.223072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.223084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.223176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.223210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.223427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.223473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.223729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.223764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 
00:26:46.864 [2024-11-07 10:55:14.223905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.223937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.224153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.224187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.224476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.864 [2024-11-07 10:55:14.224512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.864 qpair failed and we were unable to recover it. 00:26:46.864 [2024-11-07 10:55:14.224658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.224671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.224829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.224863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.225057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.225091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.225354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.225386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.225589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.225602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.225757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.225770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.225866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.225879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 
00:26:46.865 [2024-11-07 10:55:14.225990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.226024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.226223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.226257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.226465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.226503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.226617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.226629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.226811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.226842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.226986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.227018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.227235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.227269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.227507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.227540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.227748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.227780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.227977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.228010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 
00:26:46.865 [2024-11-07 10:55:14.228208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.228241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.228427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.228468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.228722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.228756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.229001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.229014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.865 [2024-11-07 10:55:14.229176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.865 [2024-11-07 10:55:14.229188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.865 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.229351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.229386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.229644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.229680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.229829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.229862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.230091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.230103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.230352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.230386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 
00:26:46.866 [2024-11-07 10:55:14.230611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.230646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.230792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.230824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.231022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.231034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.231126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.231137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.231336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.231349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.231442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.231455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.231558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.231569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.231659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.231670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.231760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.231772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.231864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.231875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 
00:26:46.866 [2024-11-07 10:55:14.232011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.232024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.232286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.232319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.232541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.232576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.232756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.232769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.232864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.232903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.233039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.233072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.233319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.233351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.233502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.233537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.233736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.233770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 00:26:46.866 [2024-11-07 10:55:14.233887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.866 [2024-11-07 10:55:14.233900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.866 qpair failed and we were unable to recover it. 
00:26:46.867 [2024-11-07 10:55:14.234011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.234024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.234119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.234131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.234303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.234318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.234528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.234542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.234734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.234745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.234853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.234893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.235105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.235138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.235339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.235372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.235526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.235559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.235756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.235798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 
00:26:46.867 [2024-11-07 10:55:14.235888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.235899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.236001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.236012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.236189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.236202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.236371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.236383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.236470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.236482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.236577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.236589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.236688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.236700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.236776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.236788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.237558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.237583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.237749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.237764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 
00:26:46.867 [2024-11-07 10:55:14.237918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.237931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.238040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.238054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.238227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.238239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.238396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.238429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.238623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.238658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.238802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.238836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.238968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.867 [2024-11-07 10:55:14.238980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.867 qpair failed and we were unable to recover it. 00:26:46.867 [2024-11-07 10:55:14.239126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.239138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.239386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.239399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.239500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.239512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 
00:26:46.868 [2024-11-07 10:55:14.239705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.239719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.239868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.239880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.240782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.240810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.240976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.240990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.241236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.241260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.241457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.241494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.241670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.241705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.241925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.241961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.242180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.242193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.242379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.242412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 
00:26:46.868 [2024-11-07 10:55:14.242626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.242661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.242880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.242915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.243111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.243153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.243317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.243350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.243496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.243510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.243621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.243634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.243718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.243730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.243828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.243838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.243945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.243956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.244160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.244194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 
00:26:46.868 [2024-11-07 10:55:14.244397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.244432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.244576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.244609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.244735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.244748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.868 [2024-11-07 10:55:14.245053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.868 [2024-11-07 10:55:14.245088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.868 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.245228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.245262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.245453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.245486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.245681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.245714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.245919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.245952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.246158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.246191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.246323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.246358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 
00:26:46.869 [2024-11-07 10:55:14.246509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.246544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.246702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.246734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.246874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.246909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.247048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.247081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.247260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.247293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.247552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.247566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.247718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.247755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.247883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.247917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.248155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.248189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.248454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.248489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 
00:26:46.869 [2024-11-07 10:55:14.248695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.248728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.248920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.248954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.249206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.249239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.249545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.249579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.249845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.249891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.249979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.249990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.250209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.250242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.250507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.869 [2024-11-07 10:55:14.250542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.869 qpair failed and we were unable to recover it. 00:26:46.869 [2024-11-07 10:55:14.250772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.250805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.250965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.250999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 
00:26:46.870 [2024-11-07 10:55:14.251139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.251172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.251386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.251420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.251645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.251661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.251826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.251860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.252003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.252037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.252188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.252222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.252407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.252457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.252593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.252627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.252833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.252866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.253054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.253088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 
00:26:46.870 [2024-11-07 10:55:14.253365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.253400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.253542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.253555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.253635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.253646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.253741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.253754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.253925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.253958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.254182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.254216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.255470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.255514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.255697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.255710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.255924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.255958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.256298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.256332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 
00:26:46.870 [2024-11-07 10:55:14.256560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.256598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.256751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.256784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.256902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.256943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.257105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.257117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.870 [2024-11-07 10:55:14.257222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.870 [2024-11-07 10:55:14.257254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.870 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.257459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.257495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.257712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.257747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.257932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.257945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.258245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.258280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.258476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.258512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 
00:26:46.871 [2024-11-07 10:55:14.258656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.258669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.258845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.258877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.259033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.259067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.259211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.259245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.259511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.259547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.259828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.259841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.259939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.259973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.260337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.260370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.260572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.260607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.260743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.260778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 
00:26:46.871 [2024-11-07 10:55:14.260942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.260955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.261169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.261203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.261402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.261454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.261587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.261621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.261741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.261753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.261913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.261926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.262035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.262048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.262196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.262210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.262401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.262414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.262625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.262639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 
00:26:46.871 [2024-11-07 10:55:14.262747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.871 [2024-11-07 10:55:14.262760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.871 qpair failed and we were unable to recover it. 00:26:46.871 [2024-11-07 10:55:14.262848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.262860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.262956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.262968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.263045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.263057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.263137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.263149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.263315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.263329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.263492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.263505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.263664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.263678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.263762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.263773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.263867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.263880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 
00:26:46.872 [2024-11-07 10:55:14.263973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.263985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.264262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.264275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.264428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.264448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.264539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.264551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.264655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.264666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.264853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.264865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.264949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.264960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.265125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.265138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.265226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.265239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.265327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.265338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 
00:26:46.872 [2024-11-07 10:55:14.265499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.265516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.265679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.265694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.265870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.265883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.265975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.265987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.266070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.266082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.266336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.266349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.266504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.266517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.266664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.266679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.266786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.266799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 00:26:46.872 [2024-11-07 10:55:14.266898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.872 [2024-11-07 10:55:14.266910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.872 qpair failed and we were unable to recover it. 
00:26:46.873 [2024-11-07 10:55:14.267163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.267177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.267277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.267289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.267387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.267403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.267602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.267616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.267816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.267849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.267993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.268026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.268289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.268323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.268553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.268587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.268738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.268750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.268827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.268838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 
00:26:46.873 [2024-11-07 10:55:14.269048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.269080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.269268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.269304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.269525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.269559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.269691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.269726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.269977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.270011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.270327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.270357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.270603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.270616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.270721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.270735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.270810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.270822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.270990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.271003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 
00:26:46.873 [2024-11-07 10:55:14.271244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.271258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.271356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.271369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.271587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.271600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.271709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.271721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.271789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.271800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.271957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.271990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.272253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.272288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.272507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.272544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.272748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.272761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.272878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.272912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 
00:26:46.873 [2024-11-07 10:55:14.273070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.273104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.273327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.273360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.273550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.273585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.873 [2024-11-07 10:55:14.273728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.873 [2024-11-07 10:55:14.273763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.873 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.273907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.273941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.274150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.274163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.274394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.274430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.274576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.274610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.274827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.274861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.275064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.275078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 
00:26:46.874 [2024-11-07 10:55:14.275259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.275272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.275556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.275569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.275670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.275684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.275847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.275881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.276024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.276057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.276332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.276367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.276601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.276636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.276781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.276794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.276969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.277003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.277278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.277311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 
00:26:46.874 [2024-11-07 10:55:14.277536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.277570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.277894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.277928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.278186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.278219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.278478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.278513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.278644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.278678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.278887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.278921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.279285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.279319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.279470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.279506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.279770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.279803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.280044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.280057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 
00:26:46.874 [2024-11-07 10:55:14.280215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.280227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.280506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.280542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.280753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.280787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.281069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.281084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.281265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.281279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.281506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.281519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.281728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.874 [2024-11-07 10:55:14.281762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.874 qpair failed and we were unable to recover it. 00:26:46.874 [2024-11-07 10:55:14.281955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.875 [2024-11-07 10:55:14.281989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.875 qpair failed and we were unable to recover it. 00:26:46.875 [2024-11-07 10:55:14.282274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.875 [2024-11-07 10:55:14.282307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.875 qpair failed and we were unable to recover it. 00:26:46.875 [2024-11-07 10:55:14.282524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.875 [2024-11-07 10:55:14.282559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.875 qpair failed and we were unable to recover it. 
00:26:46.875 [2024-11-07 10:55:14.282719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.875 [2024-11-07 10:55:14.282752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.875 qpair failed and we were unable to recover it. 00:26:46.875 [2024-11-07 10:55:14.282887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.875 [2024-11-07 10:55:14.282901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.875 qpair failed and we were unable to recover it. 00:26:46.875 [2024-11-07 10:55:14.283122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.875 [2024-11-07 10:55:14.283156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.875 qpair failed and we were unable to recover it. 00:26:46.875 [2024-11-07 10:55:14.283387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.875 [2024-11-07 10:55:14.283420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.875 qpair failed and we were unable to recover it. 00:26:46.875 [2024-11-07 10:55:14.283701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.875 [2024-11-07 10:55:14.283735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.875 qpair failed and we were unable to recover it. 00:26:46.875 [2024-11-07 10:55:14.283902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.875 [2024-11-07 10:55:14.283915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.875 qpair failed and we were unable to recover it. 00:26:46.875 [2024-11-07 10:55:14.284005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.875 [2024-11-07 10:55:14.284017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.875 qpair failed and we were unable to recover it. 00:26:46.875 [2024-11-07 10:55:14.284200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.875 [2024-11-07 10:55:14.284233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.875 qpair failed and we were unable to recover it. 00:26:46.875 [2024-11-07 10:55:14.284496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.875 [2024-11-07 10:55:14.284532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.875 qpair failed and we were unable to recover it. 00:26:46.875 [2024-11-07 10:55:14.284673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.875 [2024-11-07 10:55:14.284706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.875 qpair failed and we were unable to recover it. 
00:26:46.875 [2024-11-07 10:55:14.284904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.875 [2024-11-07 10:55:14.284937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:46.875 qpair failed and we were unable to recover it.
00:26:46.876 [2024-11-07 10:55:14.291473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.876 [2024-11-07 10:55:14.291507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:46.876 qpair failed and we were unable to recover it.
00:26:46.876 [2024-11-07 10:55:14.291672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.876 [2024-11-07 10:55:14.291747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420
00:26:46.876 qpair failed and we were unable to recover it.
00:26:46.877 [2024-11-07 10:55:14.302804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.877 [2024-11-07 10:55:14.302837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420
00:26:46.877 qpair failed and we were unable to recover it.
00:26:46.877 [2024-11-07 10:55:14.303254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.877 [2024-11-07 10:55:14.303331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420
00:26:46.877 qpair failed and we were unable to recover it.
00:26:46.881 [2024-11-07 10:55:14.336288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.881 [2024-11-07 10:55:14.336323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420
00:26:46.881 qpair failed and we were unable to recover it.
00:26:46.881 [2024-11-07 10:55:14.336522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.881 [2024-11-07 10:55:14.336557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.881 qpair failed and we were unable to recover it. 00:26:46.881 [2024-11-07 10:55:14.336716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.881 [2024-11-07 10:55:14.336751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.881 qpair failed and we were unable to recover it. 00:26:46.881 [2024-11-07 10:55:14.336950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.881 [2024-11-07 10:55:14.336985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.881 qpair failed and we were unable to recover it. 00:26:46.881 [2024-11-07 10:55:14.337183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.881 [2024-11-07 10:55:14.337218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.881 qpair failed and we were unable to recover it. 00:26:46.881 [2024-11-07 10:55:14.337405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.881 [2024-11-07 10:55:14.337446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.881 qpair failed and we were unable to recover it. 00:26:46.881 [2024-11-07 10:55:14.337736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.881 [2024-11-07 10:55:14.337771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.881 qpair failed and we were unable to recover it. 00:26:46.881 [2024-11-07 10:55:14.338042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.881 [2024-11-07 10:55:14.338058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.881 qpair failed and we were unable to recover it. 00:26:46.881 [2024-11-07 10:55:14.338300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.881 [2024-11-07 10:55:14.338317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.881 qpair failed and we were unable to recover it. 00:26:46.881 [2024-11-07 10:55:14.338484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.881 [2024-11-07 10:55:14.338506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.881 qpair failed and we were unable to recover it. 00:26:46.881 [2024-11-07 10:55:14.338674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.881 [2024-11-07 10:55:14.338709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.881 qpair failed and we were unable to recover it. 
00:26:46.881 [2024-11-07 10:55:14.338954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.881 [2024-11-07 10:55:14.338989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.881 qpair failed and we were unable to recover it. 00:26:46.881 [2024-11-07 10:55:14.339277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.881 [2024-11-07 10:55:14.339311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.881 qpair failed and we were unable to recover it. 00:26:46.881 [2024-11-07 10:55:14.339527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.881 [2024-11-07 10:55:14.339561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.881 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.339828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.339863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.340140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.340174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.340379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.340414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.340644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.340678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.340823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.340857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.340989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.341024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.341175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.341209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 
00:26:46.882 [2024-11-07 10:55:14.341463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.341499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.341652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.341685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.341893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.341929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.342263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.342280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.342494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.342530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.342785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.342819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.342944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.342961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.343073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.343091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.343252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.343269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.343544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.343578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 
00:26:46.882 [2024-11-07 10:55:14.343849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.343885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.344159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.344176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.344335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.344352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.344542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.344578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.344766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.344801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.344960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.345000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.345271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.345306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.345522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.345572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.345785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.345820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.345979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.346013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 
00:26:46.882 [2024-11-07 10:55:14.346219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.346254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.346493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.346528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.346676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.882 [2024-11-07 10:55:14.346710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.882 qpair failed and we were unable to recover it. 00:26:46.882 [2024-11-07 10:55:14.346897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.346933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.347186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.347202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.347307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.347325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.347565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.347583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.347739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.347757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.347975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.347993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.348209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.348227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 
00:26:46.883 [2024-11-07 10:55:14.348477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.348495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.348654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.348672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.348802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.348837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.348998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.349034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.349322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.349358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.349570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.349606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.349762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.349797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.350074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.350117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.350306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.350340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.350591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.350627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 
00:26:46.883 [2024-11-07 10:55:14.350885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.350920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.351054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.351071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.351258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.351299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.351550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.351585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.351794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.351838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.351922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.351937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.352115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.352133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.352245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.352263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.352564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.352599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.352862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.352897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 
00:26:46.883 [2024-11-07 10:55:14.353120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.353138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.353324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.353342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.353551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.353569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.353747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.353784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.354071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.354105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.354302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.354319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.354538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.354614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.883 [2024-11-07 10:55:14.354859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.883 [2024-11-07 10:55:14.354897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.883 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.355157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.355173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.355392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.355409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 
00:26:46.884 [2024-11-07 10:55:14.355595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.355612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.355725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.355742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.355990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.356025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.356231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.356263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.356474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.356509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.356662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.356694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.356889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.356923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.357169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.357202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.357458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.357492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.357640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.357683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 
00:26:46.884 [2024-11-07 10:55:14.357891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.357925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.358115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.358148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.358356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.358390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.358602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.358619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.358767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.358798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.358949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.358982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.359269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.359302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.359460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.359494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.359652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.359685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.359963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.359998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 
00:26:46.884 [2024-11-07 10:55:14.360279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.360313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.360609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.360642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.360781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.360814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.361081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.361099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.361314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.361330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.361582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.361622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.361765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.361799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.361948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.361981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.362184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.362217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.362423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.362471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 
00:26:46.884 [2024-11-07 10:55:14.362678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.362711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.362867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.884 [2024-11-07 10:55:14.362883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.884 qpair failed and we were unable to recover it. 00:26:46.884 [2024-11-07 10:55:14.363052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.363084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.363307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.363340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.363560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.363595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.363748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.363781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.364133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.364168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.364315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.364348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.364645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.364679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.364934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.364966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 
00:26:46.885 [2024-11-07 10:55:14.365161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.365178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.365289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.365324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.365509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.365543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.365749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.365783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.366016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.366049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.366246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.366263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.366444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.366479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.366622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.366656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.366795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.366828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.367038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.367078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 
00:26:46.885 [2024-11-07 10:55:14.367364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.367398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.367553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.367571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.367744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.367761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.367883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.367899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.367991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.368006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.368245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.368261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.368486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.368503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.368595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.368610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.368793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.368826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.368971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.369003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 
00:26:46.885 [2024-11-07 10:55:14.369199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.369234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.369492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.369526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.369723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.369757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.369902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.369918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.370195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.370229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.370478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.885 [2024-11-07 10:55:14.370513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.885 qpair failed and we were unable to recover it. 00:26:46.885 [2024-11-07 10:55:14.370666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.370699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.370824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.370840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.370934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.370949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.371068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.371103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 
00:26:46.886 [2024-11-07 10:55:14.371227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.371260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.371563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.371597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.371733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.371767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.371910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.371944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.372078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.372111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.372367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.372384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.372686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.372762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.372929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.372967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.373253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.373287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.373560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.373598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 
00:26:46.886 [2024-11-07 10:55:14.373805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.373839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.374130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.374163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.374382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.374416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.374577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.374612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.374768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.374803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.375025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.375059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.375360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.375394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.375620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.375654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.375850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.375884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.376106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.376150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 
00:26:46.886 [2024-11-07 10:55:14.376355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.376388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.376595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.376613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.376803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.376837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.377047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.377082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.377317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.377352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.377569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.377604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.377732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.377765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.378029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.378062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.378335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.378368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-11-07 10:55:14.378589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-11-07 10:55:14.378625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 
00:26:46.886 [2024-11-07 10:55:14.378857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.378891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.379112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.379146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.379338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.379372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.379588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.379626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.379772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.379808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.379950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.379985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.380105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.380140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.380393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.380427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.380660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.380695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.380887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.380920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 
00:26:46.887 [2024-11-07 10:55:14.381143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.381177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.381404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.381446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.381584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.381618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.381890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.381922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.382130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.382164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.382444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.382479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.382731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.382807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.382960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.382992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.383270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.383308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.383454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.383491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 
00:26:46.887 [2024-11-07 10:55:14.383693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.383726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.383924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.383958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.384283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.384316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.384480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.384515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.384708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.384741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.384857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.384890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.385070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.385083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.385312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.385347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.385552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.385585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-11-07 10:55:14.385794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-11-07 10:55:14.385837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 
00:26:46.887 [2024-11-07 10:55:14.386045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.386080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.386269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.386303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.386450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.386485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.386639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.386672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.386933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.386966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.387289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.387322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.387462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.387497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.387684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.387718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.387878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.387916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.388068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.388081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 
00:26:46.888 [2024-11-07 10:55:14.388294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.388327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.388535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.388570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.388773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.388807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.388971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.389005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.389262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.389297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.389487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.389521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.389667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.389701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.389933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.389967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.390162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.390197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.390513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.390549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 
00:26:46.888 [2024-11-07 10:55:14.390831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.390865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.391019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.391051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.391251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.391285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.391513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.391548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.391698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.391732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.391871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.391904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.392196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.392237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.392471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.392507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.392662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.392696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.392900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.392934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 
00:26:46.888 [2024-11-07 10:55:14.393219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.393232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.393392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.393426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.393585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.393617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.393803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.888 [2024-11-07 10:55:14.393837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.888 qpair failed and we were unable to recover it. 00:26:46.888 [2024-11-07 10:55:14.394146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.394181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.394462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.394497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.394707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.394741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.394894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.394929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.395174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.395186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.395401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.395443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 
00:26:46.889 [2024-11-07 10:55:14.395710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.395744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.395940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.395991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.396203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.396236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.396469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.396503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.396630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.396663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.396864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.396898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.397116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.397128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.397208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.397220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.397375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.397387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.397552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.397566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 
00:26:46.889 [2024-11-07 10:55:14.397696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.397732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.398017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.398050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.398166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.398215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.398419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.398437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.398654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.398666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.398815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.398828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.398990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.399003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.399142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.399154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.399307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.399340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.399518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.399553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 
00:26:46.889 [2024-11-07 10:55:14.399813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.399848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.400045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.400079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.400356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.400389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.400595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.400629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.400822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.400856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.401015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.401027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.401210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.401251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.401508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.889 [2024-11-07 10:55:14.401543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.889 qpair failed and we were unable to recover it. 00:26:46.889 [2024-11-07 10:55:14.401689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.401721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.401874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.401908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 
00:26:46.890 [2024-11-07 10:55:14.402099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.402134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.402404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.402447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.402652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.402686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.402825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.402858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.403187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.403221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.403499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.403535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.403687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.403723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.403910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.403944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.404175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.404209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.404538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.404573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 
00:26:46.890 [2024-11-07 10:55:14.404788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.404821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.404984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.405017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.405214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.405227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.405388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.405401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.405564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.405598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.405801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.405834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.406024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.406057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.406175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.406186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.406360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.406393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.406691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.406725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 
00:26:46.890 [2024-11-07 10:55:14.406949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.406984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.407270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.407304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.407513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.407548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.407712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.407745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.407909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.407943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.408176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.408209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.408429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.408475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.408681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.408709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.408829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.408842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.408953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.408984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 
00:26:46.890 [2024-11-07 10:55:14.409213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.409246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.409486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.890 [2024-11-07 10:55:14.409521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.890 qpair failed and we were unable to recover it. 00:26:46.890 [2024-11-07 10:55:14.409691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.409725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.409942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.409976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.410203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.410237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.410446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.410482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.410764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.410803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.410967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.411000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.411146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.411179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.411321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.411333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 
00:26:46.891 [2024-11-07 10:55:14.411511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.411545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.411749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.411784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.411925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.411958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.412275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.412319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.412504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.412517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.412687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.412720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.412944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.412978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.413115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.413148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.413425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.413447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.413605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.413618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 
00:26:46.891 [2024-11-07 10:55:14.413781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.413816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.414089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.414122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.414409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.414423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.414587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.414601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.414712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.414725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.414839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.414853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.415016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.415029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.415193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.415206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.415362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.415396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.415637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.415672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 
00:26:46.891 [2024-11-07 10:55:14.415879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.415911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.416077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.416112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.416348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.416382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.416620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.416654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.416868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.416901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.417178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.891 [2024-11-07 10:55:14.417221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.891 qpair failed and we were unable to recover it. 00:26:46.891 [2024-11-07 10:55:14.417464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.417479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.417571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.417582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.417820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.417853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.418160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.418195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 
00:26:46.892 [2024-11-07 10:55:14.418404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.418417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.418547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.418586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.418735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.418768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.418979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.419013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.419215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.419249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.419493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.419508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.419679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.419695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.419811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.419844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.420032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.420067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.420271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.420305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 
00:26:46.892 [2024-11-07 10:55:14.420465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.420480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.420702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.420737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.420895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.420928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.421193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.421227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.421495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.421530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.421793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.421826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.421981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.421994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.422179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.422192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.422370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.422403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.422673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.422754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 
00:26:46.892 [2024-11-07 10:55:14.423131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.423213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.423453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.423476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.423714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.423751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.424040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.424075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.424294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.424313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.424536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.424554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-11-07 10:55:14.424675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-11-07 10:55:14.424693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.424860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.424877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.424980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.424996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.425155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.425173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 
00:26:46.893 [2024-11-07 10:55:14.425428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.425471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.425700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.425736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.425948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.425982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.426296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.426331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.426449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.426466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.426624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.426642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.426751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.426766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.426980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.426998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.427182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.427215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.427498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.427535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 
00:26:46.893 [2024-11-07 10:55:14.427743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.427778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.427940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.427977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.428167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.428201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.428361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.428379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.428583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.428619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.428784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.428819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.428967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.429001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.429211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.429229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.429529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.429564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.429705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.429740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 
00:26:46.893 [2024-11-07 10:55:14.429905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.429939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.430084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.430119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.430331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.430367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.430561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.430596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.430752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.430786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.431060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.431094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.431354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.431388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.431584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.431620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.431894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.431929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-11-07 10:55:14.432059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.432094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 
00:26:46.893 [2024-11-07 10:55:14.432238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-11-07 10:55:14.432258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.432497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.432532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.432691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.432727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.432947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.432982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.433273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.433308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.433591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.433626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.433913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.433932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.434024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.434040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.434198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.434216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.434388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.434406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 
00:26:46.894 [2024-11-07 10:55:14.434640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.434674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.434827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.434861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.435051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.435086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.435403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.435446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.435666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.435701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.435853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.435886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.436037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.436055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.436306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.436340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.436558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.436593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.436801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.436836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 
00:26:46.894 [2024-11-07 10:55:14.436985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.437018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.437227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.437262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.437458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.437476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.437609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.437644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.437782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.437816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.437979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.438014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.438337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.438380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.438500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.438519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.438668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.438717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.438921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.438956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 
00:26:46.894 [2024-11-07 10:55:14.439166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.439200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.439404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.439422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.439659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.439694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.439834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.439867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.440019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-11-07 10:55:14.440053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-11-07 10:55:14.440255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.440290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.440488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.440523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.440715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.440748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.440940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.440975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.441187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.441221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 
00:26:46.895 [2024-11-07 10:55:14.441514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.441548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.441723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.441742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.441919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.441953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.442166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.442200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.442530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.442568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.442800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.442835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.442992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.443027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.443318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.443355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.443553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.443588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.443897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.443932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 
00:26:46.895 [2024-11-07 10:55:14.444216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.444250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.444480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.444516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.444780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.444819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.445015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.445033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.445344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.445380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.445616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.445651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.445865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.445900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.446045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.446078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.446282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.446316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.446592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.446610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 
00:26:46.895 [2024-11-07 10:55:14.446780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.446798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.446997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.447032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.447220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.447255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.447458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.447493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.447642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.447677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.447916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.447951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.448160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.448197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.448330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.448348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-11-07 10:55:14.448569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-11-07 10:55:14.448648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.448894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.448933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 
00:26:46.896 [2024-11-07 10:55:14.449201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.449236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.449463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.449498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.449725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.449760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.449975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.450010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.450241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.450253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.450431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.450479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.450636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.450669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.450970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.451004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.451284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.451317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.451558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.451572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 
00:26:46.896 [2024-11-07 10:55:14.451747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.451781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.451938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.451981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.452190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.452225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.452504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.452518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.452686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.452699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.452893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.452927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.453055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.453089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.453289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.453322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.453547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.453582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.453750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.453785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 
00:26:46.896 [2024-11-07 10:55:14.453991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.454025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.454280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.454294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.454386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.454398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.454575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.454589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.454713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.454726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.454892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.454921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.455203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.455236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.455375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.455408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.455678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.455694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.455813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.455826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 
00:26:46.896 [2024-11-07 10:55:14.455989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.456022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.456233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.456268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.456474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.456510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.456772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.456785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.456877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-11-07 10:55:14.456889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-11-07 10:55:14.456991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.457005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.457198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.457210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.457363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.457376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.457676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.457716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.457870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.457904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 
00:26:46.897 [2024-11-07 10:55:14.458131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.458165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.458453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.458489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.458786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.458819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.459078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.459112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.459414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.459482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.459707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.459741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.459954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.459988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.460331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.460365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.460542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.460578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.460720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.460754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 
00:26:46.897 [2024-11-07 10:55:14.460907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.460941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.461088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.461123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.461267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.461301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.461473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.461493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.461741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.461758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.461893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.461910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.462209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.462244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.462512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.462547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.462722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.462757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.462968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.463003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 
00:26:46.897 [2024-11-07 10:55:14.463225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.463259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.463522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.463539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.463649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.463668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.463858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.463896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.464056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.464090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.464354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.464394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.464580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.897 [2024-11-07 10:55:14.464598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.897 qpair failed and we were unable to recover it. 00:26:46.897 [2024-11-07 10:55:14.464693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.464711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.464801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.464819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.465074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.465091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 
00:26:46.898 [2024-11-07 10:55:14.465265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.465306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.465599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.465637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.465832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.465866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.466018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.466053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.466333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.466367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.466626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.466661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.466827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.466863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.467022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.467056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.467264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.467298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.467521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.467557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 
00:26:46.898 [2024-11-07 10:55:14.467692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.467728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.467864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.467881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.468888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.468925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.469169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.469189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.469290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.469308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.469489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.469525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.469779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.469814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.470031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.470066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.470228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.470262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.470581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.470617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 
00:26:46.898 [2024-11-07 10:55:14.470778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.470815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.471011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.471047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.471273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.471317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.471536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.471573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.471790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.471825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.472044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.472078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.472301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.472345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.472537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.472558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.472678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.472696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.472866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.472884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 
00:26:46.898 [2024-11-07 10:55:14.473071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.473106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.473340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.473379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.473611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.473649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.473804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.473840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.898 qpair failed and we were unable to recover it. 00:26:46.898 [2024-11-07 10:55:14.473997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.898 [2024-11-07 10:55:14.474033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.474239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.474275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.474491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.474512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.474631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.474648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.474806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.474824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.475004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.475022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 
00:26:46.899 [2024-11-07 10:55:14.475264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.475282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.475460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.475479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.475656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.475674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.475847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.475865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.475993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.476011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.476109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.476127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.476249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.476267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.476401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.476419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.476565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.476584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.476811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.476833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 
00:26:46.899 [2024-11-07 10:55:14.477001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.477018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.477275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.477292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.477468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.477488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.477666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.477685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.477801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.477818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.477926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.477942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.478207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.478224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.478457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.478475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.478658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.478675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.478848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.478866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 
00:26:46.899 [2024-11-07 10:55:14.478998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.479017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.479283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.479302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.479475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.479493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.479728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.479772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.479896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.479917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.480039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.480057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.480309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.480328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.480498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.480516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.480694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.480712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.480940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.480957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 
00:26:46.899 [2024-11-07 10:55:14.481152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.899 [2024-11-07 10:55:14.481169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.899 qpair failed and we were unable to recover it. 00:26:46.899 [2024-11-07 10:55:14.481291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.481309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.481499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.481520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.481721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.481740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.481833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.481851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.482021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.482039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.482245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.482268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.482427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.482450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.482609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.482627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.482815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.482851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 
00:26:46.900 [2024-11-07 10:55:14.483078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.483116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.483404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.483452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.483689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.483725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.483871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.483905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.484137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.484173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.484424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.484454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.484644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.484662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.484819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.484837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.484947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.484964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 00:26:46.900 [2024-11-07 10:55:14.485140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.900 [2024-11-07 10:55:14.485181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:46.900 qpair failed and we were unable to recover it. 
00:26:46.900 [2024-11-07 10:55:14.485358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.900 [2024-11-07 10:55:14.485395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420
00:26:46.900 qpair failed and we were unable to recover it.
[... the same three-line pattern (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 10:55:14.485 to 10:55:14.509, first for tqpair=0x7f8978000b90 and then for tqpair=0x1f1dbe0 ...]
00:26:47.181 [2024-11-07 10:55:14.509446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2bb20 (9): Bad file descriptor
[... the same connect-failure pattern then repeats from 10:55:14.509 to 10:55:14.535, once for tqpair=0x7f8978000b90, once for tqpair=0x7f896c000b90, and then repeatedly for tqpair=0x7f8970000b90, all with addr=10.0.0.2, port=4420 ...]
00:26:47.184 [2024-11-07 10:55:14.535105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.184 [2024-11-07 10:55:14.535141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.184 qpair failed and we were unable to recover it.
00:26:47.184 [2024-11-07 10:55:14.535347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.184 [2024-11-07 10:55:14.535381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.184 qpair failed and we were unable to recover it. 00:26:47.184 [2024-11-07 10:55:14.535576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.184 [2024-11-07 10:55:14.535592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.184 qpair failed and we were unable to recover it. 00:26:47.184 [2024-11-07 10:55:14.535686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.184 [2024-11-07 10:55:14.535700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.184 qpair failed and we were unable to recover it. 00:26:47.184 [2024-11-07 10:55:14.535885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.184 [2024-11-07 10:55:14.535920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.184 qpair failed and we were unable to recover it. 00:26:47.184 [2024-11-07 10:55:14.536062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.184 [2024-11-07 10:55:14.536096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.184 qpair failed and we were unable to recover it. 00:26:47.184 [2024-11-07 10:55:14.536407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.184 [2024-11-07 10:55:14.536462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.184 qpair failed and we were unable to recover it. 00:26:47.184 [2024-11-07 10:55:14.536604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.184 [2024-11-07 10:55:14.536617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.536781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.536794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.536943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.536957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.537126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.537139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 
00:26:47.185 [2024-11-07 10:55:14.537303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.537317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.537427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.537450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.537617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.537651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.537851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.537886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.538183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.538217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.538537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.538573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.538874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.538911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.539220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.539255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.539511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.539524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.539787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.539801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 
00:26:47.185 [2024-11-07 10:55:14.540016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.540030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.540182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.540195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.540343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.540356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.540536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.540572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.540848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.540882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.541034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.541068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.541298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.541332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.541525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.541559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.541767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.541802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.541988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.542022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 
00:26:47.185 [2024-11-07 10:55:14.542307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.542341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.542621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.542635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.542859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.542895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.543087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.543120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.543401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.543446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.185 qpair failed and we were unable to recover it. 00:26:47.185 [2024-11-07 10:55:14.543664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.185 [2024-11-07 10:55:14.543698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.543878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.543891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.544145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.544179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.544329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.544363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.544585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.544620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 
00:26:47.186 [2024-11-07 10:55:14.544762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.544797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.545011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.545043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.545352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.545399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.545560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.545573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.545773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.545807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.546107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.546146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.546270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.546304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.546486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.546500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.546742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.546775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.547062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.547096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 
00:26:47.186 [2024-11-07 10:55:14.547380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.547413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.547633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.547668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.547894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.547907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.548072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.548105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.548390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.548425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.548720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.548733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.548954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.548966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.549177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.549190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.549355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.549368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.549602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.549638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 
00:26:47.186 [2024-11-07 10:55:14.549854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.549888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.550175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.550209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.550405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.550418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.550638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.550651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.550923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.550956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.551269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.551303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.551526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.551540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.551796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.551810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.551999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.552032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.186 qpair failed and we were unable to recover it. 00:26:47.186 [2024-11-07 10:55:14.552318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.186 [2024-11-07 10:55:14.552353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 
00:26:47.187 [2024-11-07 10:55:14.552487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.552523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.552732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.552745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.552836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.552849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.553121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.553155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.553358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.553391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.553690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.553726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.553992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.554006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.554080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.554092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.554248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.554261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.554423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.554441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 
00:26:47.187 [2024-11-07 10:55:14.554690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.554714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.555015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.555049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.555332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.555365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.555656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.555692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.555971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.556005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.556295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.556335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.556496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.556532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.556722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.556735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.556831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.556844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.557013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.557027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 
00:26:47.187 [2024-11-07 10:55:14.557196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.557231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.557427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.557472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.557694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.557728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.557944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.557978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.558192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.558225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.558513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.558548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.558748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.558761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.558975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.559008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.559294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.559328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.559637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.559674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 
00:26:47.187 [2024-11-07 10:55:14.559931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.559965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.560206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.560240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.187 [2024-11-07 10:55:14.560555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.187 [2024-11-07 10:55:14.560591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.187 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.560864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.560877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.561047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.561081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.561268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.561302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.561534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.561570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.561851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.561884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.562165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.562200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.562463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.562498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 
00:26:47.188 [2024-11-07 10:55:14.562804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.562838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.563028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.563062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.563273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.563308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.563537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.563550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.563816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.563850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.564123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.564158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.564344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.564377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.564593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.564606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.564879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.564913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.565219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.565253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 
00:26:47.188 [2024-11-07 10:55:14.565516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.565552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.565816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.565851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.566145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.566179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.566301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.566335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.566510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.566524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.566769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.566808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.567017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.567051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.567243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.567276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.567539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.567573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.567856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.567890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 
00:26:47.188 [2024-11-07 10:55:14.568195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.568229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.568426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.568468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.568729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.568763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.569046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.569080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.569407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.569466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.569784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.569819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.188 [2024-11-07 10:55:14.570103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.188 [2024-11-07 10:55:14.570137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.188 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.570421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.570466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.570742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.570776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.571058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.571092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 
00:26:47.189 [2024-11-07 10:55:14.571330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.571372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.571643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.571666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.571830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.571850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.572014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.572028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.572276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.572310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.572593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.572633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.572794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.572829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.573090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.573124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.573315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.573349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.573627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.573640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 
00:26:47.189 [2024-11-07 10:55:14.573797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.573810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.573979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.574013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.574304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.574340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.574486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.574522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.574775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.574788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.574868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.574881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.574993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.575006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.575220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.575233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.575447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.575460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.575620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.575633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 
00:26:47.189 [2024-11-07 10:55:14.575814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.575848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.576042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.576076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.576291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.576324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.576510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.576524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.576775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.576809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.577019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.577059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.577343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.577377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.577649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.577663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.577901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.577913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 00:26:47.189 [2024-11-07 10:55:14.578076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.578089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.189 qpair failed and we were unable to recover it. 
00:26:47.189 [2024-11-07 10:55:14.578259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.189 [2024-11-07 10:55:14.578293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.578523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.578558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.578767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.578781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.578960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.578995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.579276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.579310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.579600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.579636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.579896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.579931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.580232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.580266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.580540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.580575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.580872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.580907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 
00:26:47.190 [2024-11-07 10:55:14.581141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.581175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.581385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.581419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.581726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.581760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.581970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.582004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.582202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.582236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.582378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.582412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.582630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.582665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.582911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.582924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.583161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.583174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.583344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.583357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 
00:26:47.190 [2024-11-07 10:55:14.583509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.583522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.583751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.583785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.583996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.584032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.584240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.584275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.584564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.584602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.584735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.584770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.585057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.585092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.585364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.585376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.585523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.585537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 00:26:47.190 [2024-11-07 10:55:14.585807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.190 [2024-11-07 10:55:14.585840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.190 qpair failed and we were unable to recover it. 
00:26:47.190 [2024-11-07 10:55:14.586193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.586227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.586490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.586525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.586819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.586854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.587127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.587162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.587423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.587471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.587685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.587726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.587992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.588027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.588237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.588270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.588496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.588533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.588779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.588791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 
00:26:47.191 [2024-11-07 10:55:14.588948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.588961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.589178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.589212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.589450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.589486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.589755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.589790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.590074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.590108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.590393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.590428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.590705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.590719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.590907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.590920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.591139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.591173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.591368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.591402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 
00:26:47.191 [2024-11-07 10:55:14.591641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.591654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.591919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.591932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.592100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.592113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.592276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.592289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.592449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.592484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.592712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.592746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.592952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.592966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.593210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.593244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.593373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.593408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.593649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.593684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 
00:26:47.191 [2024-11-07 10:55:14.593972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.594006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.594279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.594293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.594390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.594403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.594627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.191 [2024-11-07 10:55:14.594640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.191 qpair failed and we were unable to recover it. 00:26:47.191 [2024-11-07 10:55:14.594881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.594895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.595128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.595141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.595355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.595368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.595573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.595588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.595827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.595840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.596049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.596062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 
00:26:47.192 [2024-11-07 10:55:14.596274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.596287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.596524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.596537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.596724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.596737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.596881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.596895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.597152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.597186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.597394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.597444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.597658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.597672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.597813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.597827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.597990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.598003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.598256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.598289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 
00:26:47.192 [2024-11-07 10:55:14.598490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.598527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.598824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.598858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.599123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.599157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.599457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.599492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.599707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.599742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.599976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.600010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.600158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.600193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.600480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.600516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.600720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.600754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.600962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.600975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 
00:26:47.192 [2024-11-07 10:55:14.601164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.601199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.601399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.601462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.601687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.601722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.601927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.601961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.602161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.602196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.602479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.602519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.602762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.602795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.603059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-11-07 10:55:14.603094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-11-07 10:55:14.603320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.603366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.603533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.603546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 
00:26:47.193 [2024-11-07 10:55:14.603809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.603845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.604061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.604095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.604418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.604509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.604869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.604947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.605261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.605299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.605461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.605497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.605653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.605686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.605969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.606003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.606143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.606176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.606368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.606402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 
00:26:47.193 [2024-11-07 10:55:14.606672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.606689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.606864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.606880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.607131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.607164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.607430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.607474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.607698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.607716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.607879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.607901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.608078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.608112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.608305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.608339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.608575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.608609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.608801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.608834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 
00:26:47.193 [2024-11-07 10:55:14.609024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.609058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.609343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.609375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.609584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.609601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.609705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.609723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.609873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.609889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.609968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.609985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.610205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.610222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.610401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.610418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.610670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.610687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.610842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.610859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 
00:26:47.193 [2024-11-07 10:55:14.611045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.611062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.611217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-11-07 10:55:14.611233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-11-07 10:55:14.611486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.611503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.611771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.611789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.612031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.612047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.612296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.612332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.612547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.612582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.612874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.612908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.613185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.613218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.613523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.613541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 
00:26:47.194 [2024-11-07 10:55:14.613708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.613725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.613893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.613910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.614071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.614105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.614249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.614283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.614558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.614592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.614877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.614895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.615141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.615158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.615262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.615279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.615516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.615533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.615702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.615719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 
00:26:47.194 [2024-11-07 10:55:14.615969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.616002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.616193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.616226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.616373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.616406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.616662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.616679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.616954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.616987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.617296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.617336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.617548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.617583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.617822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.617855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.618131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.618175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.618461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.618495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 
00:26:47.194 [2024-11-07 10:55:14.618705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.618738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.619001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.619034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.619296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.619330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.619590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.619625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.619842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.619875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.620140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.620174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-11-07 10:55:14.620490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-11-07 10:55:14.620508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.620686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.620702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.620863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.620897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.621143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.621178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 
00:26:47.195 [2024-11-07 10:55:14.621395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.621429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.621661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.621694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.621946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.621963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.622136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.622153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.622402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.622419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.622594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.622611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.622806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.622840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.623046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.623080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.623308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.623341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.623550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.623569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 
00:26:47.195 [2024-11-07 10:55:14.623844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.623861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.624024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.624041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.624237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.624257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.624487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.624504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.624758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.624790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.625058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.625091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.625397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.625430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.625634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.625667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.625952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.625985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.626180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.626214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 
00:26:47.195 [2024-11-07 10:55:14.626476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.626511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.626704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.626737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.626949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.626982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.627275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.627308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.627596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.627632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.627912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.627946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-11-07 10:55:14.628233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-11-07 10:55:14.628266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.628472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.628507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.628792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.628836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.629094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.629136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 
00:26:47.196 [2024-11-07 10:55:14.629427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.629473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.629755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.629789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.629980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.630013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.630294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.630328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.630617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.630652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.630931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.630968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.631195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.631228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.631457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.631494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.631635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.631669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.631898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.631933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 
00:26:47.196 [2024-11-07 10:55:14.632158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.632192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.632378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.632412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.632736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.632770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.633078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.633120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.633330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.633364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.633639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.633673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.633868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.633885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.634113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.634146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.634362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.634395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.634599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.634633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 
00:26:47.196 [2024-11-07 10:55:14.634784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.634817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.635018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.635051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.635364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.635404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.635683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.635700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.635913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.635929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.636200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.636216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.636402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.636419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.636598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.636614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.636842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.636859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-11-07 10:55:14.637063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-11-07 10:55:14.637096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 
00:26:47.196 [2024-11-07 10:55:14.637353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.637387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.637603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.637622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.637899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.637932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.638093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.638127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.638393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.638428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.638735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.638769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.638926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.638943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.639174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.639207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.639423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.639471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.639767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.639800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 
00:26:47.197 [2024-11-07 10:55:14.640090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.640122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.640405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.640453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.640677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.640693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.640847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.640864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.641040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.641073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.641284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.641317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.641603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.641641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.641920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.641953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.642240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.642274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.642555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.642590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 
00:26:47.197 [2024-11-07 10:55:14.642798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.642832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.643036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.643071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.643264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.643297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.643562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.643612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.643835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.643852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.644139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.644172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.644311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.644345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.644624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.644658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.644941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.644958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.645133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.645151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 
00:26:47.197 [2024-11-07 10:55:14.645306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.645339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.645623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.645657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.645965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.646005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.646202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-11-07 10:55:14.646236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-11-07 10:55:14.646528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.646562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.646844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.646889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.647095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.647112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.647358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.647374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.647542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.647558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.647730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.647763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 
00:26:47.198 [2024-11-07 10:55:14.648069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.648102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.648331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.648365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.648631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.648666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.648933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.648950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.649255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.649289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.649518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.649553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.649796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.649829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.649978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.650012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.650206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.650240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.650521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.650555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 
00:26:47.198 [2024-11-07 10:55:14.650841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.650875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.651084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.651117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.651263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.651296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.651620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.651654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.651937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.651971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.652258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.652292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.652486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.652520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.652726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.652760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.652888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.652921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.653225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.653258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 
00:26:47.198 [2024-11-07 10:55:14.653460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.653496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.653713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.653746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.654039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.654074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.654351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.654384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.654673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.654707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.654994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.655028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.655311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.655344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.655611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-11-07 10:55:14.655645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-11-07 10:55:14.655835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.655852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.656017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.656033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 
00:26:47.199 [2024-11-07 10:55:14.656261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.656277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.656521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.656538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.656793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.656812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.656907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.656923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.657165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.657198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.657428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.657506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.657803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.657837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.658039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.658072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.658274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.658307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.658592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.658626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 
00:26:47.199 [2024-11-07 10:55:14.658846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.658879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.659107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.659139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.659395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.659429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.659735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.659768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.660072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.660105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.660371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.660404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.660706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.660739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.661035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.661083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.661350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.661383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.661587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.661632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 
00:26:47.199 [2024-11-07 10:55:14.661850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.661867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.662014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.662031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.662281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.662314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.662533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.662568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.662849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.662882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.663003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.663036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.663261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.663293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.663571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.663605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.663799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.663833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.664124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.664158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 
00:26:47.199 [2024-11-07 10:55:14.664445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.664480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.664712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-11-07 10:55:14.664746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-11-07 10:55:14.664939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.664955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.665146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.665180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.665373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.665407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.665726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.665760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.665986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.666019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.666235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.666268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.666476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.666512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.666730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.666764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 
00:26:47.200 [2024-11-07 10:55:14.667032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.667048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.667218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.667235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.667482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.667502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.667735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.667768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.667968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.668002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.668281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.668313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.668629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.668663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.668952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.668985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.669262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.669296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.669584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.669620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 
00:26:47.200 [2024-11-07 10:55:14.669848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.669864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.670097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.670129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.670411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.670455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.670723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.670739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.670853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.670870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.671097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.671130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.671407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.671451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.671733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.671765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.671973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.672006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.672214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.672247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 
00:26:47.200 [2024-11-07 10:55:14.672472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.672507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.672696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.672730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-11-07 10:55:14.672993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-11-07 10:55:14.673025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.673304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.673337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.673615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.673649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.673937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.673970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.674247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.674280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.674503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.674536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.674819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.674852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.675044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.675077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 
00:26:47.201 [2024-11-07 10:55:14.675287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.675321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.675525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.675558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.675812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.675856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.676106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.676141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.676400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.676462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.676741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.676774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.677035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.677067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.677356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.677389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.677604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.677638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.677832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.677849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 
00:26:47.201 [2024-11-07 10:55:14.678028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.678062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.678247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.678280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.678504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.678544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.678817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.678834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.679053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.679069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.679330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.679378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.679574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.679609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.679871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.679905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.680088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.680104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.680274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.680290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 
00:26:47.201 [2024-11-07 10:55:14.680463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.680499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.680636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.680668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.680859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.680892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.681145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.681161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.681391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.681424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.681698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-11-07 10:55:14.681731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-11-07 10:55:14.681974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.681991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.682230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.682264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.682542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.682577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.682781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.682797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 
00:26:47.202 [2024-11-07 10:55:14.682991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.683024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.683207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.683238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.683373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.683405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.683697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.683730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.683942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.683975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.684206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.684239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.684366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.684399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.684713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.684748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.685022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.685055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.685321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.685354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 
00:26:47.202 [2024-11-07 10:55:14.685561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.685596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.685865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.685898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.686095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.686112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.686355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.686371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.686488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.686506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.686681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.686715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.686919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.686952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.687140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.687174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.687376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.687410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.687677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.687709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 
00:26:47.202 [2024-11-07 10:55:14.687832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.687864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.688123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.688156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.688460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.688499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.688778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.688811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.689067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.689100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.689297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.689330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.689531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.689564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.689765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.689797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.690075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.690107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-11-07 10:55:14.690388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.690421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 
00:26:47.202 [2024-11-07 10:55:14.690640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-11-07 10:55:14.690673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.690883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.690915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.691170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.691202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.691415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.691459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.691716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.691750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.691950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.691983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.692265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.692282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.692456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.692473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.692696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.692729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.692941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.692974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 
00:26:47.203 [2024-11-07 10:55:14.693239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.693272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.693498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.693532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.693740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.693773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.694026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.694043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.694230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.694247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.694480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.694497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.694744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.694761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.694922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.694939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.695107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.695123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.695283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.695317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 
00:26:47.203 [2024-11-07 10:55:14.695578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.695612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.695897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.695930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.696218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.696251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.696481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.696517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.696723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.696756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.697034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.697067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.697204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.697238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.697494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.697527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.697808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.697841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.698122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.698154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 
00:26:47.203 [2024-11-07 10:55:14.698410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.698453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.698749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.698782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.698985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.699004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.699178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.699195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-11-07 10:55:14.699438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-11-07 10:55:14.699456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.699536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.699573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.699794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.699828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.700054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.700087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.700271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.700288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.700464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.700481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 
00:26:47.204 [2024-11-07 10:55:14.700715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.700732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.700913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.700945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.701092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.701125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.701405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.701447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.701726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.701773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.702028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.702044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.702360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.702394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.702692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.702728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.702995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.703041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.703264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.703280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 
00:26:47.204 [2024-11-07 10:55:14.703523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.703540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.703828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.703860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.704051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.704084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.704364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.704397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.704598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.704632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.704937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.704969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.705177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.705210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.705469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.705504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.705720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.705753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.706037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.706054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 
00:26:47.204 [2024-11-07 10:55:14.706276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.706292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.706391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.706407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.706647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.706680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.706818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.706850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.707166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.707199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.707449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.707484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.707795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.707812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.707977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.707994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-11-07 10:55:14.708117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-11-07 10:55:14.708150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.708355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.708388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 
00:26:47.205 [2024-11-07 10:55:14.708593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.708628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.708895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.708912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.709086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.709105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.709202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.709235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.709458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.709493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.709634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.709666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.709891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.709908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.709986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.710031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.710246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.710278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.710417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.710460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 
00:26:47.205 [2024-11-07 10:55:14.710592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.710625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.710914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.710947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.711250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.711284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.711486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.711519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.711806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.711839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.712125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.712159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.712455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.712490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.712701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.712733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.713018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.713051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.713336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.713368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 
00:26:47.205 [2024-11-07 10:55:14.713599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.713634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.713857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.713890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.714136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.714153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.714396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.714413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.714641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.714658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.714911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.714927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.715154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.715171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.715393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.715409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.715649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-11-07 10:55:14.715666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-11-07 10:55:14.715915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.715950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 
00:26:47.206 [2024-11-07 10:55:14.716224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.716257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.716494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.716529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.716741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.716774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.716966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.716999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.717287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.717320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.717505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.717522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.717685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.717725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.717988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.718021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.718331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.718365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.718653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.718687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 
00:26:47.206 [2024-11-07 10:55:14.718964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.718997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.719208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.719241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.719523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.719563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.719849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.719883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.720158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.720191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.720457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.720491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.720795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.720829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.721025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.721058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.721338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.721371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.721689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.721724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 
00:26:47.206 [2024-11-07 10:55:14.722008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.722042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.722236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.722268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.722582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.722617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.722808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.722842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.723047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.723079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.723286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.723302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.723555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.723590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.723774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.723790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.724055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.724089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.724346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.724380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 
00:26:47.206 [2024-11-07 10:55:14.724683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.724717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.724925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.724957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.725227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-11-07 10:55:14.725243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-11-07 10:55:14.725399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.725416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.725644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.725682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.725934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.725971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.726269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.726303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.726571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.726606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.726806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.726819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.727109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.727200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 
00:26:47.207 [2024-11-07 10:55:14.727533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.727570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.727785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.727818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.728019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.728051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.728270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.728286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.728547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.728563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.728808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.728839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.729127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.729160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.729391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.729424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.729674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.729710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.729908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.729947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 
00:26:47.207 [2024-11-07 10:55:14.730111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.730127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.730354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.730387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.730667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.730708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.730934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.730951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.731123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.731157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.731451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.731486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.731761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.731794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.732040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.732072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.732402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.732447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.732676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.732709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 
00:26:47.207 [2024-11-07 10:55:14.733025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.733059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.733206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.733239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.733523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.733557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.733744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.733791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.734020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.734037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.734225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.734242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.734419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-11-07 10:55:14.734442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-11-07 10:55:14.734697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.734729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.734919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.734952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.735212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.735229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 
00:26:47.208 [2024-11-07 10:55:14.735458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.735475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.735720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.735737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.735991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.736007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.736172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.736188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.736442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.736459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.736706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.736741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.737027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.737060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.737351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.737385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.737667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.737714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.737909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.737958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 
00:26:47.208 [2024-11-07 10:55:14.738268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.738308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.738522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.738560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.738860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.738894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.739100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.739134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.739418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.739466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.739677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.739711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.739899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.739934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.740138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.740155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.740310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.740327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.740556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.740592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 
00:26:47.208 [2024-11-07 10:55:14.740873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.740906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.741125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.741159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.741453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.741489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.741690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.741724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.741954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.741988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.742255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.742289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.742550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.742585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.742794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.742828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.743113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.743147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.743355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.743389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 
00:26:47.208 [2024-11-07 10:55:14.743635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-11-07 10:55:14.743672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-11-07 10:55:14.743980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.744014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.744281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.744315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.744537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.744573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.744862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.744896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.745101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.745135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.745427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.745479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.745684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.745719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.745999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.746032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.746295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.746330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 
00:26:47.209 [2024-11-07 10:55:14.746642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.746677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.746927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.746961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.747273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.747308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.747570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.747605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.747809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.747843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.748128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.748162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.748394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.748427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.748709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.748743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.748926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.748961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.749219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.749237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 
00:26:47.209 [2024-11-07 10:55:14.749490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.749528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.749844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.749879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.750081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.750115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.750395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.750430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.750739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.750774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.750969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.751004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.751221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.751254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.751519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.751554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.751831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.751866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.752153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.752186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 
00:26:47.209 [2024-11-07 10:55:14.752402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.752445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.752752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.752786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.753071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.753105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.753382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.753406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.753582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.753600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-11-07 10:55:14.753853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-11-07 10:55:14.753886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.754088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.754123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.754352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.754386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.754657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.754692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.754961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.754995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 
00:26:47.210 [2024-11-07 10:55:14.755195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.755213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.755408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.755454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.755744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.755779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.756045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.756078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.756375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.756413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.756719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.756753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.756962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.756996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.757295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.757329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.757469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.757505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.757647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.757681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 
00:26:47.210 [2024-11-07 10:55:14.757944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.757978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.758262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.758296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.758516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.758552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.758792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.758808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.759059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.759097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.759240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.759273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.759476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.759510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.759738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.759772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.760064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.760082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.760241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.760258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 
00:26:47.210 [2024-11-07 10:55:14.760415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.760440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.760672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.760708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.760935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.760969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.761223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.761258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.761484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.761520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.761767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-11-07 10:55:14.761801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-11-07 10:55:14.762105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.762139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.762422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.762467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.762755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.762789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.763025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.763059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 
00:26:47.211 [2024-11-07 10:55:14.763321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.763355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.763502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.763538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.763821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.763856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.764137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.764171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.764387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.764423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.764624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.764658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.764973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.765007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.765294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.765328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.765526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.765563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.765823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.765857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 
00:26:47.211 [2024-11-07 10:55:14.766130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.766165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.766377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.766410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.766709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.766744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.767024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.767058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.767286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.767304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.767478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.767496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.767659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.767676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.767854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.767872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.768046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.768063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.768315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.768349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 
00:26:47.211 [2024-11-07 10:55:14.768637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.768673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.768895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.768930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.769126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.769160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.769363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.769396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.769729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.769748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.769993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.770011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.770255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.770273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.770560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.770596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.770813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.770848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-11-07 10:55:14.771161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-11-07 10:55:14.771195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 
00:26:47.212 [2024-11-07 10:55:14.771503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.771538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.771792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.771833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.772141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.772176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.772462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.772479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.772719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.772737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.773024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.773059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.773359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.773394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.773688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.773723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.773919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.773952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.774214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.774247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 
00:26:47.212 [2024-11-07 10:55:14.774471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.774508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.774712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.774748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.774974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.775008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.775222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.775258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.775468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.775505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.775794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.775830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.775994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.776028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.776311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.776345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.776630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.776664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.776949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.776983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 
00:26:47.212 [2024-11-07 10:55:14.777269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.777303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.777588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.777624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.777774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.777810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.778101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.778134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.778429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.778477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.778762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.778797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.779062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.779079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.779278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.779295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.779518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.779539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.779789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.779826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 
00:26:47.212 [2024-11-07 10:55:14.780094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.780128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.780426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.780472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.780776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.780811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.781091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-11-07 10:55:14.781126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-11-07 10:55:14.781319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.781353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.781636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.781671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.781958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.781993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.782230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.782247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.782420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.782463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.782735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.782769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 
00:26:47.213 [2024-11-07 10:55:14.782981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.783014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.783320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.783337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.783627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.783663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.783949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.783983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.784265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.784298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.784519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.784554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.784760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.784795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.784984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.785017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.785290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.785308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.785564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.785582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 
00:26:47.213 [2024-11-07 10:55:14.785736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.785754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.786001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.786018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.786114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.786131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.786328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.786345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.786580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.786597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.786792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.786812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.787076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.787093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.787243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.787260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.787483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.787501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.787675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.787693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 
00:26:47.213 [2024-11-07 10:55:14.787862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.787896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.788153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.788189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.788455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.788490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.788693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.788726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.788996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.789037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.789323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.789358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.789637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.789672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.789881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.789916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-11-07 10:55:14.790057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-11-07 10:55:14.790093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.790361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.790396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 
00:26:47.214 [2024-11-07 10:55:14.790616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.790652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.790937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.790984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.791271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.791305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.791572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.791608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.791892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.791928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.792212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.792245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.792527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.792563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.792802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.792836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.793024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.793042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.793233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.793267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 
00:26:47.214 [2024-11-07 10:55:14.793528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.793564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.793858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.793893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.794110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.794144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.794414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.794431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.794685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.794719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.794933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.794967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.795250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.795292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.795515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.795534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.795762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.795780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-11-07 10:55:14.795944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-11-07 10:55:14.795961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 
00:26:47.497 [2024-11-07 10:55:14.848464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.497 [2024-11-07 10:55:14.848481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420
00:26:47.497 qpair failed and we were unable to recover it.
00:26:47.497 [2024-11-07 10:55:14.848713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.497 [2024-11-07 10:55:14.848730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.497 qpair failed and we were unable to recover it. 00:26:47.497 [2024-11-07 10:55:14.848919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.497 [2024-11-07 10:55:14.848953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.497 qpair failed and we were unable to recover it. 00:26:47.497 [2024-11-07 10:55:14.849170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.497 [2024-11-07 10:55:14.849205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.497 qpair failed and we were unable to recover it. 00:26:47.497 [2024-11-07 10:55:14.849419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.497 [2024-11-07 10:55:14.849473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.497 qpair failed and we were unable to recover it. 00:26:47.497 [2024-11-07 10:55:14.849676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.497 [2024-11-07 10:55:14.849694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.497 qpair failed and we were unable to recover it. 00:26:47.497 [2024-11-07 10:55:14.849919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.497 [2024-11-07 10:55:14.849936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.497 qpair failed and we were unable to recover it. 00:26:47.497 [2024-11-07 10:55:14.850183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.497 [2024-11-07 10:55:14.850200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.497 qpair failed and we were unable to recover it. 00:26:47.497 [2024-11-07 10:55:14.850420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.497 [2024-11-07 10:55:14.850451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.497 qpair failed and we were unable to recover it. 00:26:47.497 [2024-11-07 10:55:14.850701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.497 [2024-11-07 10:55:14.850718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.497 qpair failed and we were unable to recover it. 00:26:47.497 [2024-11-07 10:55:14.850890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.497 [2024-11-07 10:55:14.850908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.497 qpair failed and we were unable to recover it. 
00:26:47.497 [2024-11-07 10:55:14.851103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.497 [2024-11-07 10:55:14.851137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.497 qpair failed and we were unable to recover it. 00:26:47.497 [2024-11-07 10:55:14.851423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.497 [2024-11-07 10:55:14.851469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.497 qpair failed and we were unable to recover it. 00:26:47.497 [2024-11-07 10:55:14.851614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.851630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.851876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.851893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.852089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.852107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.852280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.852297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.852409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.852455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.852729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.852763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.853001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.853035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.853226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.853260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 
00:26:47.498 [2024-11-07 10:55:14.853559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.853597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.853866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.853901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.854109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.854144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.854509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.854549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.854756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.854794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.855096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.855131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.855347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.855389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.855614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.855628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.855873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.855907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.856177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.856210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 
00:26:47.498 [2024-11-07 10:55:14.856542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.856578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.856865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.856899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.857138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.857172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.857458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.857494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.857810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.857844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.858132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.858167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.858493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.858544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.858830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.858864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.859129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.859162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.859487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.859523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 
00:26:47.498 [2024-11-07 10:55:14.859808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.859841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.860065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.860098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.860319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.860332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.860544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.860557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.860777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.860811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.861092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.861125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.498 qpair failed and we were unable to recover it. 00:26:47.498 [2024-11-07 10:55:14.861351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.498 [2024-11-07 10:55:14.861364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.861612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.861646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.861994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.862027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.862251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.862285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 
00:26:47.499 [2024-11-07 10:55:14.862489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.862502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.862767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.862800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.863014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.863047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.863258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.863271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.863441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.863454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.863698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.863711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.863870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.863883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.864150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.864183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.864388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.864422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.864715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.864728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 
00:26:47.499 [2024-11-07 10:55:14.864871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.864884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.865052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.865085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.865277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.865310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.865606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.865647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.865939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.865972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.866239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.866274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.866499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.866533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.866669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.866701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.866979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.866992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.867149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.867162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 
00:26:47.499 [2024-11-07 10:55:14.867281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.867313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.867522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.867558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.867867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.867901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.868188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.868202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.868423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.868467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.868759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.868793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.868999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.869033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.869265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.869299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.869514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.869549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 00:26:47.499 [2024-11-07 10:55:14.869743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.499 [2024-11-07 10:55:14.869777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.499 qpair failed and we were unable to recover it. 
00:26:47.499 [2024-11-07 10:55:14.870063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.870097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.870309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.870343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.870597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.870610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.870856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.870890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.871036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.871070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.871360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.871394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.871656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.871691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.871990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.872023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.872247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.872280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.872472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.872485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 
00:26:47.500 [2024-11-07 10:55:14.872736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.872771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.872978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.873012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.873210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.873245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.873521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.873557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.873867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.873900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.874110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.874144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.874364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.874397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.874662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.874676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.874856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.874868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.875021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.875034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 
00:26:47.500 [2024-11-07 10:55:14.875192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.875205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.875440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.875453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.875668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.875681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.875898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.875914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.876147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.876160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.876306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.876319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.876407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.876420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.876663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.876699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.876901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.876934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 00:26:47.500 [2024-11-07 10:55:14.877217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.500 [2024-11-07 10:55:14.877250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.500 qpair failed and we were unable to recover it. 
00:26:47.501 [2024-11-07 10:55:14.877459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.877495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.877780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.877814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.877948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.877981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.878262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.878296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.878561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.878574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.878771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.878805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.879116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.879150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.879390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.879403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.879575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.879588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.879831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.879864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 
00:26:47.501 [2024-11-07 10:55:14.880074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.880109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.880296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.880330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.880562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.880597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.880887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.880921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.881123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.881157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.881457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.881492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.881782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.881817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.882089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.882123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.882416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.882462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.882588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.882621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 
00:26:47.501 [2024-11-07 10:55:14.882892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.882925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.883207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.883240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.883375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.883407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.883601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.883614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.883851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.883864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.884035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.884068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.884279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.884312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.884576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.884590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.884804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.884816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.884961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.884974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 
00:26:47.501 [2024-11-07 10:55:14.885149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.885183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.885316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.885350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.885557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.885593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.885870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.501 [2024-11-07 10:55:14.885910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.501 qpair failed and we were unable to recover it. 00:26:47.501 [2024-11-07 10:55:14.886124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.886157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.886359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.886394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.886715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.886749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.887006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.887040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.887229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.887262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.887548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.887584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 
00:26:47.502 [2024-11-07 10:55:14.887891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.887926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.888127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.888152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.888404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.888462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.888800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.888839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.889063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.889097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.889360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.889395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.889617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.889653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.889897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.889931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.890217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.890250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.890476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.890512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 
00:26:47.502 [2024-11-07 10:55:14.890795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.890829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.891128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.891163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.891428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.891451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.891604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.891621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.891819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.891854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.892008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.892042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.892322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.892356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.892523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.892558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.892772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.892806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.893089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.893123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 
00:26:47.502 [2024-11-07 10:55:14.893327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.893374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.893519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.893538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.893787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.893820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.894026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.894060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.894342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.894377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.894601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.894637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.894848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.894883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.895199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.895233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.502 [2024-11-07 10:55:14.895504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.502 [2024-11-07 10:55:14.895540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.502 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.895749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.895783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 
00:26:47.503 [2024-11-07 10:55:14.895988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.896024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.896251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.896286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.896595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.896631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.896919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.896953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.897233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.897268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.897601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.897619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.897868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.897886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.898062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.898079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.898274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.898308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.898596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.898630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 
00:26:47.503 [2024-11-07 10:55:14.898918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.898953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.899230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.899275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.899580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.899615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.899809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.899844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.900127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.900161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.900417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.900439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.900638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.900656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.900933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.900973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.901234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.901268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.901545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.901579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 
00:26:47.503 [2024-11-07 10:55:14.901913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.901948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.902159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.902198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.902348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.902365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.902617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.902634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.902895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.902943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.903252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.903287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.903487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.903505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.903764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.903799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.903990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.904024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.904288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.904331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 
00:26:47.503 [2024-11-07 10:55:14.904483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.904501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.904696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.904713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.904872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.904905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.503 qpair failed and we were unable to recover it. 00:26:47.503 [2024-11-07 10:55:14.905113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.503 [2024-11-07 10:55:14.905147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.905440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.905476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.905689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.905722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.905916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.905951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.906082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.906116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.906327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.906362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.906555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.906589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 
00:26:47.504 [2024-11-07 10:55:14.906716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.906751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.906943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.906976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.907293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.907328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.907615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.907651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.907929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.907971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.908120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.908154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.908333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.908350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.908526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.908544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.908730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.908764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.908997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.909032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 
00:26:47.504 [2024-11-07 10:55:14.909251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.909287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.909481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.909516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.909777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.909815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.910095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.910130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.910445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.910481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.910684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.910718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.910981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.911015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.911304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.911338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.911638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.911676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.911832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.911849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 
00:26:47.504 [2024-11-07 10:55:14.912113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.912147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.912380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.912414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.912627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.912645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.912906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.912940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.913157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.913193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.913482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.913517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.913815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.913850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.914151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.504 [2024-11-07 10:55:14.914186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.504 qpair failed and we were unable to recover it. 00:26:47.504 [2024-11-07 10:55:14.914305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.914338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.914558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.914576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 
00:26:47.505 [2024-11-07 10:55:14.914753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.914798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.915004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.915038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.915200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.915235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.915556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.915593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.915808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.915827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.916000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.916036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.916204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.916238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.916528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.916547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.916740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.916775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.917056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.917091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 
00:26:47.505 [2024-11-07 10:55:14.917324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.917358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.917511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.917547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.917832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.917867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.918147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.918182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.918317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.918335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.918516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.918552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.918758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.918792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.919052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.919087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.919279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.919296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.919447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.919465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 
00:26:47.505 [2024-11-07 10:55:14.919632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.919649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.919765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.919799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.920007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.920043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.920311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.920347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.920543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.920579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.920848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.920883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.921177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.921212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.921422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.921481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.921767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.505 [2024-11-07 10:55:14.921785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.505 qpair failed and we were unable to recover it. 00:26:47.505 [2024-11-07 10:55:14.921888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.921906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 
00:26:47.506 [2024-11-07 10:55:14.922167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.922202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.922429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.922474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.922741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.922775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.922991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.923024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.923213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.923247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.923390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.923426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.923719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.923754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.924079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.924115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.924332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.924366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.924658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.924695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 
00:26:47.506 [2024-11-07 10:55:14.924906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.924941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.925135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.925170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.925460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.925481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.925591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.925608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.925829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.925846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.926000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.926016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.926258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.926293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.926590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.926626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.926829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.926847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.927003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.927020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 
00:26:47.506 [2024-11-07 10:55:14.927275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.927310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.927462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.927497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.927808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.927843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.928121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.928156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.928450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.928486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.928759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.928794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.929087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.929122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.929384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.929418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.929723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.929758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.929900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.929935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 
00:26:47.506 [2024-11-07 10:55:14.930086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.930122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.930331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.930366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.930573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.930592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.930773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.506 [2024-11-07 10:55:14.930808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.506 qpair failed and we were unable to recover it. 00:26:47.506 [2024-11-07 10:55:14.931061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.931096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.931360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.931396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.931671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.931706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.931844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.931879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.932091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.932125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.932318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.932361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 
00:26:47.507 [2024-11-07 10:55:14.932627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.932645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.932821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.932839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.932993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.933027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.933314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.933350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.933630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.933664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.933864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.933881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.934041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.934077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.934221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.934255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.934530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.934567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.934875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.934892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 
00:26:47.507 [2024-11-07 10:55:14.935093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.935111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.935355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.935373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.935549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.935567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.935816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.935834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.936005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.936022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.936256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.936292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.936496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.936514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.936688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.936723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.936861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.936895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.937178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.937213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 
00:26:47.507 [2024-11-07 10:55:14.937404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.937448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.937601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.937620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.937849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.937884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.938131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.938166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.938382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.938417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.938664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.938701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.938914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.938948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.939091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.939126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.507 [2024-11-07 10:55:14.939335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.507 [2024-11-07 10:55:14.939370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.507 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.939538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.939580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 
00:26:47.508 [2024-11-07 10:55:14.939805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.939822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.939929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.939948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.940098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.940115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.940280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.940298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.940530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.940566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.940770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.940788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.940955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.940990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.941281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.941317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.941461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.941496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.941777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.941795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 
00:26:47.508 [2024-11-07 10:55:14.941964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.941982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.942176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.942193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.942362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.942398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.942672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.942709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.942834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.942868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.943099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.943135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.943259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.943293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.943524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.943559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.943767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.943802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.944083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.944118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 
00:26:47.508 [2024-11-07 10:55:14.944266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.944284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.944461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.944479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.944636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.944673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.944939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.944974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.945197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.945233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.945354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.945388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.945628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.945666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.945844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.945862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.945965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.946002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.946156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.946192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 
00:26:47.508 [2024-11-07 10:55:14.946418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.946445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.946630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.946665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.946880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.946915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.508 [2024-11-07 10:55:14.947053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.508 [2024-11-07 10:55:14.947089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.508 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.947312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.947346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.947539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.947575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.947789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.947806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.947977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.948019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.948238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.948274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.948561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.948580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 
00:26:47.509 [2024-11-07 10:55:14.948747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.948783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.948977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.949012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.949225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.949261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.949447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.949466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.949643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.949679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.949873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.949907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.950102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.950137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.950272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.950307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.950517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.950535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.950675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.950711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 
00:26:47.509 [2024-11-07 10:55:14.950923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.950959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.951168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.951203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.951518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.951553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.951704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.951740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.951971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.952005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.952195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.952232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.952448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.952467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.952651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.952686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.952941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.952975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.953181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.953215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 
00:26:47.509 [2024-11-07 10:55:14.953480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.953515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.953724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.953759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.953947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.953981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.954262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.954305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.954476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.954497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.954659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.954693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.954977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.955011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.955219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.509 [2024-11-07 10:55:14.955254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.509 qpair failed and we were unable to recover it. 00:26:47.509 [2024-11-07 10:55:14.955457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.955474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.955649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.955684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 
00:26:47.510 [2024-11-07 10:55:14.955848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.955882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.956024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.956060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.956207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.956241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.956430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.956477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.956602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.956636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.956848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.956883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.957019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.957053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.957269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.957304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.957459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.957494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.957611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.957629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 
00:26:47.510 [2024-11-07 10:55:14.957854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.957888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.958119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.958154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.958271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.958305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.958485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.958503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.958752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.958788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.959003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.959037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.959325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.959360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.959562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.959597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.959734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.959752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.959899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.959917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 
00:26:47.510 [2024-11-07 10:55:14.960134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.960151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.960230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.960251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.960404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.960420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.960533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.960550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.960744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.960761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.960877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.960894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.960989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.961006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.961161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.961178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.961290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.510 [2024-11-07 10:55:14.961306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.510 qpair failed and we were unable to recover it. 00:26:47.510 [2024-11-07 10:55:14.961492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.961510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 
00:26:47.511 [2024-11-07 10:55:14.961616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.961633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.961799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.961833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.962018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.962053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.962243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.962277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.962643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.962662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.962862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.962901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.963106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.963121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.963292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.963305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.963455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.963470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.963559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.963572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 
00:26:47.511 [2024-11-07 10:55:14.963759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.963772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.963880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.963893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.964061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.964093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.964309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.964342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.964551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.964587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.964798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.964831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.964962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.964995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.965117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.965152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.965288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.965331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.965471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.965505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 
00:26:47.511 [2024-11-07 10:55:14.965728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.965762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.966045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.966078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.966218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.966253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.966465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.966500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.966629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.966642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.966861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.966894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.967110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.967143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.967333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.967367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.967583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.967618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 00:26:47.511 [2024-11-07 10:55:14.967808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.511 [2024-11-07 10:55:14.967842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.511 qpair failed and we were unable to recover it. 
[The same failure sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8970000b90, briefly tqpair=0x7f896c000b90, with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 10:55:14.968 through 10:55:15.016.]
00:26:47.517 [2024-11-07 10:55:15.016779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.016812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.017070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.017103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.017408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.017456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.017658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.017670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.017908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.017942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.018089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.018123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.018241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.018276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.018510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.018545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.018665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.018677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.018907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.018920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 
00:26:47.517 [2024-11-07 10:55:15.019073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.019086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.019311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.019345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.019636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.019670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.019950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.019962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.020222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.020256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.517 [2024-11-07 10:55:15.020469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.517 [2024-11-07 10:55:15.020504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.517 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.020680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.020692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.020939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.020973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.021279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.021311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.021584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.021634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 
00:26:47.518 [2024-11-07 10:55:15.021717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.021730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.021951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.021984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.022122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.022156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.022345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.022378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.022652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.022687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.022911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.022950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.023155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.023188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.023400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.023413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.023615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.023650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.023939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.023972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 
00:26:47.518 [2024-11-07 10:55:15.024253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.024285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.024546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.024582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.024725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.024759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.025041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.025073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.025335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.025368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.025679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.025694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.025884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.025917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.026131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.026164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.026393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.026426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.026721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.026755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 
00:26:47.518 [2024-11-07 10:55:15.026935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.026948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.027189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.027223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.027485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.027519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.027718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.027732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.027894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.027927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.028233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.028266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.028550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.028586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.028873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.028907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.029188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.029222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 00:26:47.518 [2024-11-07 10:55:15.029503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.518 [2024-11-07 10:55:15.029538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.518 qpair failed and we were unable to recover it. 
00:26:47.518 [2024-11-07 10:55:15.029744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.029777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.030037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.030071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.030373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.030406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.030718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.030752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.031036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.031070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.031274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.031308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.031596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.031633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.031908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.031921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.032166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.032179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.032421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.032461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 
00:26:47.519 [2024-11-07 10:55:15.032746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.032781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.033067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.033100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.033239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.033272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.033533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.033569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.033859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.033891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.034168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.034207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.034464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.034498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.034770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.034804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.035094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.035127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.035336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.035368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 
00:26:47.519 [2024-11-07 10:55:15.035663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.035698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.035903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.035937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.036228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.036262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.036454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.036489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.036803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.036837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.037044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.037077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.037355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.037389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.037670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.037704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.037991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.038025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.038306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.038340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 
00:26:47.519 [2024-11-07 10:55:15.038608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.038642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.038846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.038880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.039128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.039162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.039315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.519 [2024-11-07 10:55:15.039347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.519 qpair failed and we were unable to recover it. 00:26:47.519 [2024-11-07 10:55:15.039550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.039584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.039786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.039821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.040104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.040136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.040323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.040356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.040553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.040587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.040792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.040825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 
00:26:47.520 [2024-11-07 10:55:15.041108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.041140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.041355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.041388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.041637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.041685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.041864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.041877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.041999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.042033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.042293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.042326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.042532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.042566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.042844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.042877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.043068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.043101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.043314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.043346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 
00:26:47.520 [2024-11-07 10:55:15.043529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.043542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.043683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.043726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.044020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.044053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.044293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.044327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.044607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.044643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.044912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.044952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.045153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.045186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.045386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.045425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.045649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.045663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.045903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.045932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 
00:26:47.520 [2024-11-07 10:55:15.046197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.046230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.046542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.046577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.046863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.046896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.047102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.047135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.047343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.520 [2024-11-07 10:55:15.047376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.520 qpair failed and we were unable to recover it. 00:26:47.520 [2024-11-07 10:55:15.047588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.047623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.047829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.047842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.048061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.048094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.048400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.048443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.048714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.048748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 
00:26:47.521 [2024-11-07 10:55:15.048966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.048999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.049279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.049311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.049513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.049547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.049836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.049869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.050080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.050114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.050326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.050360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.050499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.050533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.050831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.050864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.051032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.051044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.051263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.051296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 
00:26:47.521 [2024-11-07 10:55:15.051576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.051610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.051896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.051909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.052175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.052188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.052289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.052322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.052582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.052616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.052896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.052929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.053216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.053250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.053448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.053484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.053694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.053726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.053920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.053953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 
00:26:47.521 [2024-11-07 10:55:15.054186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.054219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.054527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.054562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.054768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.054802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.054991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.055025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.055351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.055384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.055603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.055643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.055933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.055967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.056247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.056288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.056384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.056396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 00:26:47.521 [2024-11-07 10:55:15.056637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.521 [2024-11-07 10:55:15.056671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.521 qpair failed and we were unable to recover it. 
00:26:47.522 [2024-11-07 10:55:15.056920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.056953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.057082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.057115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.057325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.057359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.057581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.057615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.057902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.057936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.058213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.058226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.058397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.058410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.058593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.058628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.058912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.058947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.059232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.059265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 
00:26:47.522 [2024-11-07 10:55:15.059395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.059429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.059633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.059668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.059953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.059986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.060233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.060267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.060551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.060597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.060760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.060773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.060929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.060962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.061221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.061254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.061544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.061580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.061794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.061827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 
00:26:47.522 [2024-11-07 10:55:15.062056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.062089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.062346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.062379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.062686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.062721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.062946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.062980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.063116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.063149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.063342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.063376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.063671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.063707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.063972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.064006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.064189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.064202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.064440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.064453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 
00:26:47.522 [2024-11-07 10:55:15.064616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.064629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.064849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.064882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.065165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.065198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.065480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.522 [2024-11-07 10:55:15.065516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.522 qpair failed and we were unable to recover it. 00:26:47.522 [2024-11-07 10:55:15.065801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.065834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.066053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.066091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.066281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.066315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.066547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.066580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.066791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.066825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.067105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.067117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 
00:26:47.523 [2024-11-07 10:55:15.067382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.067415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.067743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.067778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.068063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.068097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.068250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.068284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.068474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.068508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.068702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.068736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.068916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.068928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.069086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.069099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.069349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.069382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.069725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.069760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 
00:26:47.523 [2024-11-07 10:55:15.069907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.069941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.070146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.070180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.070492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.070505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.070769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.070803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.071088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.071122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.071334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.071368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.071603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.071637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.071921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.071934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.072072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.072084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.072251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.072284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 
00:26:47.523 [2024-11-07 10:55:15.072494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.072528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.072736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.072768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.072993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.073006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.073254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.073288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.073446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.073480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.073758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.073795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.074053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.074086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.074345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.523 [2024-11-07 10:55:15.074380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.523 qpair failed and we were unable to recover it. 00:26:47.523 [2024-11-07 10:55:15.074676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.074710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.074978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.074992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 
00:26:47.524 [2024-11-07 10:55:15.075234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.075267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.075515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.075550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.075795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.075829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.076137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.076170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.076429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.076474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.076706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.076747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.077029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.077062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.077250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.077284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.077551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.077586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.077788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.077821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 
00:26:47.524 [2024-11-07 10:55:15.078011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.078044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.078324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.078357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.078564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.078599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.078833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.078866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.079126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.079161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.079318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.079352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.079633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.079667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.079893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.079927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.080114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.080127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.080378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.080412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 
00:26:47.524 [2024-11-07 10:55:15.080710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.080743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.081008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.081021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.081261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.081292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.081416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.081460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.081766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.081799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.082109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.082143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.082365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.082398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.082648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.082687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.082847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.082860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.083107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.083119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 
00:26:47.524 [2024-11-07 10:55:15.083295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.083328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.083604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.524 [2024-11-07 10:55:15.083640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.524 qpair failed and we were unable to recover it. 00:26:47.524 [2024-11-07 10:55:15.083854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.083888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.084096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.084129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.084387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.084449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.084645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.084678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.084961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.084996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.085260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.085293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.085599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.085632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.085940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.085974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 
00:26:47.525 [2024-11-07 10:55:15.086248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.086260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.086442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.086455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.086619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.086651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.086862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.086895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.087114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.087148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.087343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.087382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.087599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.087634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.087918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.087952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.088210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.088243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.088522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.088557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 
00:26:47.525 [2024-11-07 10:55:15.088865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.088898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.089159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.089193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.089467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.089480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.089696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.089708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.089942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.089954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.090105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.090118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.090267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.090301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.090585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.090619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.090832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.090867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.091010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.091044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 
00:26:47.525 [2024-11-07 10:55:15.091311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.091343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.091606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.091640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.091936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.525 [2024-11-07 10:55:15.091969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.525 qpair failed and we were unable to recover it. 00:26:47.525 [2024-11-07 10:55:15.092244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.092277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.092574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.092612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.092899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.092945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.093108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.093121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.093273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.093286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.093536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.093570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.093800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.093813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 
00:26:47.526 [2024-11-07 10:55:15.093895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.093907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.094098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.094111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.094452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.094501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.094713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.094752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.094959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.094995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.095192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.095230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.095490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.095528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.095838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.095874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.096192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.096209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.096379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.096397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 
00:26:47.526 [2024-11-07 10:55:15.096619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.096637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.096846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.096882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.097054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.097088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.097296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.097330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.097594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.097631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.097898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.097934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.098229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.098264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.098492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.098528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.098859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.098895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 00:26:47.526 [2024-11-07 10:55:15.099184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.526 [2024-11-07 10:55:15.099218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.526 qpair failed and we were unable to recover it. 
00:26:47.526 [2024-11-07 10:55:15.099389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.526 [2024-11-07 10:55:15.099425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420
00:26:47.526 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 10:55:15.099 to 10:55:15.156 (elapsed 00:26:47.526 through 00:26:47.819): every connect() attempt to 10.0.0.2 port 4420 returns errno = 111, first for tqpair=0x1f1dbe0 and then for tqpair=0x7f8970000b90, and each qpair fails without recovery ...]
00:26:47.819 [2024-11-07 10:55:15.156472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.819 [2024-11-07 10:55:15.156515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.819 qpair failed and we were unable to recover it.
00:26:47.819 [2024-11-07 10:55:15.156672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.819 [2024-11-07 10:55:15.156707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.819 qpair failed and we were unable to recover it. 00:26:47.819 [2024-11-07 10:55:15.156863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.819 [2024-11-07 10:55:15.156898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.819 qpair failed and we were unable to recover it. 00:26:47.819 [2024-11-07 10:55:15.157117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.819 [2024-11-07 10:55:15.157152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.819 qpair failed and we were unable to recover it. 00:26:47.819 [2024-11-07 10:55:15.157452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.819 [2024-11-07 10:55:15.157487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.819 qpair failed and we were unable to recover it. 00:26:47.819 [2024-11-07 10:55:15.157774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.819 [2024-11-07 10:55:15.157807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.819 qpair failed and we were unable to recover it. 00:26:47.819 [2024-11-07 10:55:15.158071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.819 [2024-11-07 10:55:15.158083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.819 qpair failed and we were unable to recover it. 00:26:47.819 [2024-11-07 10:55:15.158181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.819 [2024-11-07 10:55:15.158194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.819 qpair failed and we were unable to recover it. 00:26:47.819 [2024-11-07 10:55:15.158425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.819 [2024-11-07 10:55:15.158478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.819 qpair failed and we were unable to recover it. 00:26:47.819 [2024-11-07 10:55:15.158740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.819 [2024-11-07 10:55:15.158774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.819 qpair failed and we were unable to recover it. 00:26:47.819 [2024-11-07 10:55:15.159048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.159081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 
00:26:47.820 [2024-11-07 10:55:15.159287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.159323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.159586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.159623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.159878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.159891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.160073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.160085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.160334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.160367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.160606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.160641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.160920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.160953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.161159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.161172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.161428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.161450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.161695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.161729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 
00:26:47.820 [2024-11-07 10:55:15.162027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.162060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.162320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.162333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.162597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.162636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.162901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.162935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.163223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.163257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.163566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.163602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.163829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.163872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.164120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.164133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.164334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.164347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.164559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.164572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 
00:26:47.820 [2024-11-07 10:55:15.164665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.164677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.164921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.164955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.165165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.165199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.165470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.165506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.165695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.165728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.165932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.165966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.166228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.166240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.166346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.166358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.166522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.166536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.166776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.166813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 
00:26:47.820 [2024-11-07 10:55:15.167004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.167038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.820 qpair failed and we were unable to recover it. 00:26:47.820 [2024-11-07 10:55:15.167273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.820 [2024-11-07 10:55:15.167307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.167545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.167582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.167775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.167809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.168071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.168105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.168409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.168421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.168669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.168684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.168908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.168942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.169182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.169216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.169547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.169582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 
00:26:47.821 [2024-11-07 10:55:15.169774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.169807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.170067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.170112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.170328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.170340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.170499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.170512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.170686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.170720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.170922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.170956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.171164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.171211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.171448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.171461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.171701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.171714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.171930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.171944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 
00:26:47.821 [2024-11-07 10:55:15.172129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.172142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.172389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.172423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.172672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.172706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.172947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.172959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.173133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.173167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.173399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.173481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.173723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.173759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.173977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.174009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.174257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.174270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.174422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.174447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 
00:26:47.821 [2024-11-07 10:55:15.174608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.174621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.174712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.174725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.174797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.821 [2024-11-07 10:55:15.174810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.821 qpair failed and we were unable to recover it. 00:26:47.821 [2024-11-07 10:55:15.174976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.174989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.175088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.175121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.175355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.175390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.175692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.175727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.175868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.175901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.176168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.176202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.176417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.176476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 
00:26:47.822 [2024-11-07 10:55:15.176706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.176740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.176933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.176966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.177103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.177117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.177348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.177361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.177450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.177463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.177724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.177757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.177945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.177979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.178290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.178324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.178606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.178642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.178801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.178836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 
00:26:47.822 [2024-11-07 10:55:15.179050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.179085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.179225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.179238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.179399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.179411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.179516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.179530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.179769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.179803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.180085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.180118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.180312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.180324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.180591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.180626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.180917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.180951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.181253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.181285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 
00:26:47.822 [2024-11-07 10:55:15.181489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.181526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.181656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.822 [2024-11-07 10:55:15.181691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.822 qpair failed and we were unable to recover it. 00:26:47.822 [2024-11-07 10:55:15.181977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.182011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.182157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.182191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.182328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.182363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.182564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.182600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.182809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.182844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.183057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.183071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.183308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.183321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.183559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.183594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 
00:26:47.823 [2024-11-07 10:55:15.183800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.183834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.184131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.184164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.184268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.184280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.184364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.184377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.184600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.184634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.184783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.184816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.184952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.184985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.185171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.185205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.185515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.185551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.185834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.185873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 
00:26:47.823 [2024-11-07 10:55:15.186078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.186091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.186272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.186305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.186496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.186529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.186722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.186756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.186972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.187007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.187215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.187228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.187464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.187477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.187552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.187564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.823 [2024-11-07 10:55:15.187789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.823 [2024-11-07 10:55:15.187802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.823 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.187979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.187992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 
00:26:47.824 [2024-11-07 10:55:15.188171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.188204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.188518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.188552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.188696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.188729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.189018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.189052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.189312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.189325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.189557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.189570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.189783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.189797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.189949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.189962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.190113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.190125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.190241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.190254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 
00:26:47.824 [2024-11-07 10:55:15.190350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.190362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.190449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.190462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.190602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.190614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.190779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.190812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.190932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.190965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.191226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.191273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.191524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.191554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.191680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.191713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.191938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.191973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.192267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.192280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 
00:26:47.824 [2024-11-07 10:55:15.192556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.192598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.192861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.192895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.193106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.193139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.193283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.193317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.193526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.193560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.193683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.193717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.193853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.193886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.194139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.194151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.194311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.194324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.824 [2024-11-07 10:55:15.194439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.194454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 
00:26:47.824 [2024-11-07 10:55:15.194550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.824 [2024-11-07 10:55:15.194562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.824 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.194741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.194774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.194921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.194955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.195176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.195210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.195409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.195454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.195659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.195693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.196003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.196036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.196240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.196273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.196536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.196571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.196765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.196798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 
00:26:47.825 [2024-11-07 10:55:15.197004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.197017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.197114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.197126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.197286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.197299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.197404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.197451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.197574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.197608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.197757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.197790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.198056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.198090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.198295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.198329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.198620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.198656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.198924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.198958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 
00:26:47.825 [2024-11-07 10:55:15.199131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.199143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.199307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.199340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.199532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.199566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.199826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.199859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.200113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.200126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.200346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.200380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.200606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.200640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.200785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.200820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.201080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.201113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.201350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.201363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 
00:26:47.825 [2024-11-07 10:55:15.201522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.201535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.825 [2024-11-07 10:55:15.201622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.825 [2024-11-07 10:55:15.201634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.825 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.201775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.201788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.201901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.201934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.202145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.202178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.202369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.202402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.202615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.202693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.202858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.202895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.203026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.203060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.203259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.203282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 
00:26:47.826 [2024-11-07 10:55:15.203535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.203570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.203787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.203820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.204009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.204025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.204194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.204227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.204518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.204553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.204754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.204788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.204993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.205027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.205218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.205251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.205441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.205459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.205615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.205633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 
00:26:47.826 [2024-11-07 10:55:15.205878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.205895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.206121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.206153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.206365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.206398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.206600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.206634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.206842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.206874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.207082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.207115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.207318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.207351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.207616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.207651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.207785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.207818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.208026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.208058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 
00:26:47.826 [2024-11-07 10:55:15.208362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.208397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.208616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.208652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2841845 Killed "${NVMF_APP[@]}" "$@" 00:26:47.826 [2024-11-07 10:55:15.208863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.826 [2024-11-07 10:55:15.208898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.826 qpair failed and we were unable to recover it. 00:26:47.826 [2024-11-07 10:55:15.209182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.209215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.209472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.209490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:26:47.827 [2024-11-07 10:55:15.209602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.209619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.209731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.209748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.209895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.209914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 
00:26:47.827 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:47.827 [2024-11-07 10:55:15.210070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.210087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:47.827 [2024-11-07 10:55:15.210313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.210332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.210511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.210528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:47.827 [2024-11-07 10:55:15.210634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.210652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.210769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.210786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.827 [2024-11-07 10:55:15.210969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.210988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.211145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.211162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.211381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.211397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 
00:26:47.827 [2024-11-07 10:55:15.211508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.211525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.211691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.211708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.211825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.211841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.212090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.212107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.212200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.212218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.212425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.212447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.212565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.212581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.212835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.212852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.212954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.212970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.213074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.213091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 
00:26:47.827 [2024-11-07 10:55:15.213322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.213339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.213449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.213466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.213561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.213577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.213671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.213687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.827 [2024-11-07 10:55:15.213859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.827 [2024-11-07 10:55:15.213876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.827 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.214036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.214048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.214197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.214210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.214352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.214365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.214541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.214554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.214768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.214781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 
00:26:47.828 [2024-11-07 10:55:15.214871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.214883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.215027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.215041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.215196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.215210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.215295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.215307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.215401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.215414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.215588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.215601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.215696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.215710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.215805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.215821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.215902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.215914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.216005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.216017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 
00:26:47.828 [2024-11-07 10:55:15.216191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.216204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.216294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.216306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.216384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.216405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.216644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.216657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.216746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.216759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.216914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.216926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.217008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.217022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.217170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.217183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.217269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.217281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.217357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.217370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 
00:26:47.828 [2024-11-07 10:55:15.217524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.217538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.217627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.217639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.217718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.217731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2842575 00:26:47.828 [2024-11-07 10:55:15.217897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.217911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.217983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.217995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 [2024-11-07 10:55:15.218062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.828 [2024-11-07 10:55:15.218075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.828 qpair failed and we were unable to recover it. 00:26:47.828 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2842575 00:26:47.828 [2024-11-07 10:55:15.218137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.218152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.218245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.218258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 
00:26:47.829 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:47.829 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2842575 ']' 00:26:47.829 [2024-11-07 10:55:15.218502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.218519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.218658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.218671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.218747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.218760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.829 [2024-11-07 10:55:15.218946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.218960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:47.829 [2024-11-07 10:55:15.219141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.219156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.219377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.829 [2024-11-07 10:55:15.219391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.219551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.219565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 
00:26:47.829 [2024-11-07 10:55:15.219658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.219672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:47.829 [2024-11-07 10:55:15.219829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.219842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.829 [2024-11-07 10:55:15.220023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.220038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.220132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.220145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.220358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.220372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.220463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.220476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.220549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.220562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.220803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.220816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.220913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.220926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 
00:26:47.829 [2024-11-07 10:55:15.221099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.221112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.221276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.221288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.221359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.221371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.221489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.221502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.221654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.221668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.829 [2024-11-07 10:55:15.221762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.829 [2024-11-07 10:55:15.221775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.829 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.221847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.221860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.222011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.222026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.222122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.222135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.222215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.222228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 
00:26:47.830 [2024-11-07 10:55:15.222316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.222328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.222466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.222481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.222699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.222713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.222946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.222959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.223040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.223053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.223208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.223222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.223463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.223477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.223633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.223646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.223730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.223743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.223830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.223843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 
00:26:47.830 [2024-11-07 10:55:15.224001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.224014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.224095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.224109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.224209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.224222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.224379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.224392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.224530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.224544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.224701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.224717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.224787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.224800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.224883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.224896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.225077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.225090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.225259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.225272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 
00:26:47.830 [2024-11-07 10:55:15.225343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.225356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.225450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.225463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.225552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.225565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.225672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.225685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.225835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.830 [2024-11-07 10:55:15.225848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.830 qpair failed and we were unable to recover it. 00:26:47.830 [2024-11-07 10:55:15.226066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.226079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.226178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.226191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.226294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.226308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.226462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.226476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.226714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.226726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 
00:26:47.831 [2024-11-07 10:55:15.226796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.226808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.227016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.227029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.227167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.227179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.227289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.227302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.227381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.227393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.227492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.227505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.227735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.227748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.227831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.227843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.228051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.228062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.228158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.228171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 
00:26:47.831 [2024-11-07 10:55:15.228325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.228338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.228492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.228505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.228604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.228623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.228723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.228739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.228832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.228848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.229014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.229031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.229118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.229135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.229233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.229249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.229470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.229486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.229647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.229664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 
00:26:47.831 [2024-11-07 10:55:15.229766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.229781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.229950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.229966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.230126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.230142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.230310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.230326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.230427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.230448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.230528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.230547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.230627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.230644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.230745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.831 [2024-11-07 10:55:15.230763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.831 qpair failed and we were unable to recover it. 00:26:47.831 [2024-11-07 10:55:15.230979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.230996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.231217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.231233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 
00:26:47.832 [2024-11-07 10:55:15.231378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.231393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.231549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.231566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.231762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.231779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.231875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.231890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.231986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.232002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.232100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.232116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.232228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.232244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.232341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.232359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.232511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.232528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.232630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.232646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 
00:26:47.832 [2024-11-07 10:55:15.232805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.232822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.232985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.233000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.233094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.233111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.233263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.233279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.233394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.233410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.233580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.233596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.233708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.233725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.233813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.233832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.233982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.233998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.234074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.234090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 
00:26:47.832 [2024-11-07 10:55:15.234184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.234200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.234357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.234374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.234563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.234608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.234790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.234833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.235028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.235044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.235254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.235268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.235370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.235383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.235472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.235485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.235562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.235574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 00:26:47.832 [2024-11-07 10:55:15.235743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.235755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.832 qpair failed and we were unable to recover it. 
00:26:47.832 [2024-11-07 10:55:15.235850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.832 [2024-11-07 10:55:15.235863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.235972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.235985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.236148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.236161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.236258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.236270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.236358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.236371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.236460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.236476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.236616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.236628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.236695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.236707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.236865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.236878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.237040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.237052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 
00:26:47.833 [2024-11-07 10:55:15.237221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.237233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.237341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.237354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.237442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.237455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.237538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.237550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.237631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.237644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.237870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.237882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.238030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.238042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.238120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.238132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.238234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.238246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.238331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.238342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 
00:26:47.833 [2024-11-07 10:55:15.238478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.238490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.238573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.238586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.238731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.238743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.238996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.239008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.239260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.239274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.239430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.239449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.239531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.239544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.239643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.239655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.239901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.239913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.240066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.240078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 
00:26:47.833 [2024-11-07 10:55:15.240243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.240255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.240322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.240334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.240409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.240424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.240528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.833 [2024-11-07 10:55:15.240542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.833 qpair failed and we were unable to recover it. 00:26:47.833 [2024-11-07 10:55:15.240685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.240697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.240851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.240864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.241020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.241032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.241178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.241190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.241287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.241299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.241446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.241458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 
00:26:47.834 [2024-11-07 10:55:15.241611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.241624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.241781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.241793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.241977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.241989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.242125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.242137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.242221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.242233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.242372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.242385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.242469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.242481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.242633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.242645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.242875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.242886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.243114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.243126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 
00:26:47.834 [2024-11-07 10:55:15.243267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.243280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.243445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.243458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.243609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.243622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.243693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.243705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.243869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.243881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.243969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.243982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.244121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.244134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.244302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.244313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.244405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.244417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.244513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.244526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 
00:26:47.834 [2024-11-07 10:55:15.244671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.244683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.244820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.244832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.834 [2024-11-07 10:55:15.244969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.834 [2024-11-07 10:55:15.244981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.834 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.245135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.245148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.245254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.245266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.245419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.245431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.245505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.245517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.245591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.245602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.245758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.245770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.245866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.245878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 
00:26:47.835 [2024-11-07 10:55:15.246021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.246033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.246176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.246188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.246326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.246340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.246499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.246524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.246666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.246679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.246814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.246827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.246893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.246905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.247075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.247087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.247190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.247202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.247424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.247441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 
00:26:47.835 [2024-11-07 10:55:15.247605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.247618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.247751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.247763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.247915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.247927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.248164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.248175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.248314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.248326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.248401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.248413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.248560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.248573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.248752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.248764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.248852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.248865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.248938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.248950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 
00:26:47.835 [2024-11-07 10:55:15.249102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.249113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.249193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.249205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.249400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.249413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.249569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.835 [2024-11-07 10:55:15.249581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.835 qpair failed and we were unable to recover it. 00:26:47.835 [2024-11-07 10:55:15.249782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.249794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.249946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.249958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.250168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.250179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.250254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.250266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.250444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.250458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.250637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.250649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 
00:26:47.836 [2024-11-07 10:55:15.250796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.250809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.251040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.251052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.251203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.251215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.251418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.251430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.251591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.251604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.251693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.251705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.251916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.251928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.252010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.252022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.252168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.252180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.252383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.252395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 
00:26:47.836 [2024-11-07 10:55:15.252539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.252551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.252699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.252711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.252939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.252954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.253051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.253063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.253246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.253258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.253396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.253408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.253636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.253649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.253732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.253744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.253821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.253834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.253899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.253911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 
00:26:47.836 [2024-11-07 10:55:15.254046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.254058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.254140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.254152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.254287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.254299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.254443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.254456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.836 [2024-11-07 10:55:15.254677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.836 [2024-11-07 10:55:15.254690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.836 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.254840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.254852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.254988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.255000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.255151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.255164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.255309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.255321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.255404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.255417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 
00:26:47.837 [2024-11-07 10:55:15.255631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.255643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.255742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.255755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.255959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.255972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.256047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.256059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.256206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.256218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.256357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.256369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.256465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.256478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.256725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.256737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.256904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.256916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.257102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.257115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 
00:26:47.837 [2024-11-07 10:55:15.257195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.257207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.257297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.257308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.257394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.257406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.257472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.257484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.257572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.257583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.257722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.257734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.257814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.257825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.258011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.258023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.258107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.258119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.258250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.258263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 
00:26:47.837 [2024-11-07 10:55:15.258455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.258467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.258609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.258621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.258782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.258797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.258877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.258889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.259069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.259081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.837 [2024-11-07 10:55:15.259170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.837 [2024-11-07 10:55:15.259182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.837 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.259329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.259341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.259511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.259523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.259600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.259612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.259749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.259762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 
00:26:47.838 [2024-11-07 10:55:15.259847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.259859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.259993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.260005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.260098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.260110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.260242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.260255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.260336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.260348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.260480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.260493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.260581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.260593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.260739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.260751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.260837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.260849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.260936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.260947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 
00:26:47.838 [2024-11-07 10:55:15.261043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.261055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.261154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.261166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.261254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.261266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.261346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.261358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.261423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.261441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.261512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.261525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.261606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.261618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.261683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.261695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.261832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.261844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.261932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.261944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 
00:26:47.838 [2024-11-07 10:55:15.262149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.262161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.262302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.262314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.262385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.262397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.262545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.262557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.262716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.262728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.262805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.262817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.838 [2024-11-07 10:55:15.263043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.838 [2024-11-07 10:55:15.263055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.838 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.263188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.263201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.263272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.263284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.263440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.263453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 
00:26:47.839 [2024-11-07 10:55:15.263536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.263548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.263685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.263698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.263787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.263801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.263940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.263952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.264115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.264127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.264356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.264368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.264441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.264453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.264551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.264563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.264713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.264726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.264799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.264811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 
00:26:47.839 [2024-11-07 10:55:15.264876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.264889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.265033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.265045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.265192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.265205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.265404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.265416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.265493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.265506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.265654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.265666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.265871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.265883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.265964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.265976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.266204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.266216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.266310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.266322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 
00:26:47.839 [2024-11-07 10:55:15.266497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.266509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.266673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.266684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.266821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.266834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.267060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.267072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.267243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.267255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.267479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.267491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.267642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.267655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.267736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.839 [2024-11-07 10:55:15.267747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.839 qpair failed and we were unable to recover it. 00:26:47.839 [2024-11-07 10:55:15.267911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.267924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.268078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.268091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 
00:26:47.840 [2024-11-07 10:55:15.268228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.268240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.268398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.268410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.268574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.268587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.268723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.268735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.268826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.268839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.268980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.268992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.269154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.269167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.269323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.269335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.269491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.269503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.269691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.269703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 
00:26:47.840 [2024-11-07 10:55:15.269851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.269864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.269943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.269954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.270087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.270102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.270130] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:26:47.840 [2024-11-07 10:55:15.270176] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.840 [2024-11-07 10:55:15.270305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.270318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.270474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.270484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.270565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.270575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.270782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.270793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.270875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.270886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.270952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.270964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 
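The "Starting SPDK v25.01-pre ... / DPDK 24.03.0 initialization" record interleaved above suggests the nvmf target process is only just coming up while the host side keeps dialing, which is consistent with errno 111: on Linux that value is ECONNREFUSED, i.e. the kernel actively rejected the TCP connection because nothing was listening on 10.0.0.2:4420 at that moment. A minimal, self-contained sketch (purely illustrative, not part of the SPDK test; address and port copied from the log) reproduces the same errno with a plain connect():

/* Illustrative sketch only -- not part of the SPDK test suite.
 * Shows how connect() to an address with no listener surfaces
 * errno 111 (ECONNREFUSED) on Linux, the same errno reported by
 * posix_sock_create in the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Prints "errno=111 (Connection refused)" when no listener is up yet */
        printf("connect failed: errno=%d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}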
00:26:47.840 [2024-11-07 10:55:15.271109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.271122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.271209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.271220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.271292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.271305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.271382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.271395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.271474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.271486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.840 [2024-11-07 10:55:15.271568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.840 [2024-11-07 10:55:15.271585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.840 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.271684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.271697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.271836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.271849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.271985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.271997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.272142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.272154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 
00:26:47.841 [2024-11-07 10:55:15.272304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.272318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.272475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.272487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.272635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.272647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.272737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.272749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.272815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.272827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.273057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.273069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.273217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.273229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.273312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.273324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.273468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.273481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.273624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.273636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 
00:26:47.841 [2024-11-07 10:55:15.273773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.273785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.273941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.273953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.274088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.274101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.274173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.274185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.274265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.274277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.274413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.274426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.274522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.274535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.274669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.274681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.274762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.274775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.274906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.274918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 
00:26:47.841 [2024-11-07 10:55:15.274997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.275009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.275171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.275183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.275416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.275428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.275601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.275613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.275746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.275758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.276005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.276017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.276103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.276116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.276295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.841 [2024-11-07 10:55:15.276307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.841 qpair failed and we were unable to recover it. 00:26:47.841 [2024-11-07 10:55:15.276443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.276456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.276557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.276569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 
00:26:47.842 [2024-11-07 10:55:15.276742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.276754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.276916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.276928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.277135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.277147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.277225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.277237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.277374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.277386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.277539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.277554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.277702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.277714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.277863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.277875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.278022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.278035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.278243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.278255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 
00:26:47.842 [2024-11-07 10:55:15.278399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.278411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.278669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.278682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.278776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.278788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.278961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.278973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.279159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.279170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.279271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.279283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.279427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.279444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.279597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.279610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.279809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.279822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.279978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.279990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 
00:26:47.842 [2024-11-07 10:55:15.280063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.280075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.280212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.280224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.280377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.280388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.280618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.280630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.280862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.280875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.281109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.281120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.281267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.281279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.281448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.281461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.281623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.281635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.281776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.281788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 
00:26:47.842 [2024-11-07 10:55:15.282012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.282023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.842 [2024-11-07 10:55:15.282121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.842 [2024-11-07 10:55:15.282133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.842 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.282225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.282237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.282325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.282337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.282413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.282426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.282507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.282519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.282609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.282621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.282700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.282712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.282852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.282864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.283006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.283018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 
00:26:47.843 [2024-11-07 10:55:15.283170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.283182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.283246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.283258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.283396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.283408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.283544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.283556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.283695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.283707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.283783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.283798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.283970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.283982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.284251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.284263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.284349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.284361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.284426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.284442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 
00:26:47.843 [2024-11-07 10:55:15.284586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.284597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.284666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.284678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.284757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.284769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.284859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.284871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.285015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.285027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.285176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.285188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.285324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.285336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.285474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.285486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.285555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.285567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.285646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.285657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 
00:26:47.843 [2024-11-07 10:55:15.285757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.285769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.285916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.285929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.843 qpair failed and we were unable to recover it. 00:26:47.843 [2024-11-07 10:55:15.286032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.843 [2024-11-07 10:55:15.286044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.286128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.286140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.286225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.286237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.286396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.286408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.286544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.286557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.286640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.286652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.286739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.286751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.286834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.286845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 
00:26:47.844 [2024-11-07 10:55:15.286932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.286945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.287162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.287174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.287349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.287363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.287500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.287513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.287658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.287670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.287820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.287833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.287985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.287997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.288208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.288219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.288392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.288404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.288503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.288515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 
00:26:47.844 [2024-11-07 10:55:15.288582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.288594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.288667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.288679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.288758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.288770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.288926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.288938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.289027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.289038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.289122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.289134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.289334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.289345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.289442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.289454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.289521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.289533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.289601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.289613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 
00:26:47.844 [2024-11-07 10:55:15.289763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.289775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.289859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.289871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.289939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.289951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.844 [2024-11-07 10:55:15.290021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.844 [2024-11-07 10:55:15.290033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.844 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.290174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.290185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.290262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.290273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.290475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.290487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.290644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.290656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.290719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.290732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.290870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.290883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 
00:26:47.845 [2024-11-07 10:55:15.290963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.290974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.291120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.291132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.291220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.291232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.291364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.291375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.291452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.291464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.291599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.291612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.291778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.291790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.291927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.291939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.292032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.292044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.292245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.292257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 
00:26:47.845 [2024-11-07 10:55:15.292324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.292336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.292554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.292567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.292637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.292652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.292787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.292799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.292865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.292877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.293007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.293019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.293175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.293187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.293269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.293281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.293428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.293459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.845 qpair failed and we were unable to recover it. 00:26:47.845 [2024-11-07 10:55:15.293542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.845 [2024-11-07 10:55:15.293554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.846 qpair failed and we were unable to recover it. 
00:26:47.846 [2024-11-07 10:55:15.293639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:47.846 [2024-11-07 10:55:15.293651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 
00:26:47.846 qpair failed and we were unable to recover it. 
[the same three-line failure repeats for every connection attempt logged between 10:55:15.293 and 10:55:15.321: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420, and each qpair fails without recovery] 
00:26:47.852 [2024-11-07 10:55:15.321696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:47.852 [2024-11-07 10:55:15.321707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 
00:26:47.852 qpair failed and we were unable to recover it. 
00:26:47.852 [2024-11-07 10:55:15.321858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.321870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.321946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.321958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.322092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.322104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.322245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.322257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.322393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.322405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.322552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.322565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.322694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.322706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.322850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.322862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.322925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.322936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.323020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.323032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-11-07 10:55:15.323105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.323118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.323194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.323205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.323364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.323376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.323456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.323468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.323554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.323566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.323708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.323720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.323802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.323814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.323892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.323904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.324056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.324067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.324128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.324140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-11-07 10:55:15.324216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.324228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.324369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.324381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.324587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.324601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.324668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.324679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.324823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.324835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.324967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.324979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.325143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.325154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.325303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.325314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.325395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.325406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.325553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.325566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 
00:26:47.853 [2024-11-07 10:55:15.325701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.325713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.853 [2024-11-07 10:55:15.325775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.853 [2024-11-07 10:55:15.325787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.853 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.325985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.325996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.326074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.326085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.326212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.326224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.326286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.326298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.326376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.326388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.326522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.326534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.326677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.326690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.326759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.326770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 
00:26:47.854 [2024-11-07 10:55:15.326842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.326854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.327019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.327031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.327208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.327219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.327309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.327320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.327390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.327417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.327579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.327591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.327678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.327690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.327765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.327777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.327857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.327868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.327958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.327981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 
00:26:47.854 [2024-11-07 10:55:15.328159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.328175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.328331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.328346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.328455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.328472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.328552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.328567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.328727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.328742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.328823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.328838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.328944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.328959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.329035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.329050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.329189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.329205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.329288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.329303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 
00:26:47.854 [2024-11-07 10:55:15.329485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.329502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.329656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.329672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.329761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.329781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.329852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.854 [2024-11-07 10:55:15.329867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.854 qpair failed and we were unable to recover it. 00:26:47.854 [2024-11-07 10:55:15.330002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.330016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.330151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.330162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.330235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.330247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.330408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.330419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.330564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.330576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.330717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.330729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 
00:26:47.855 [2024-11-07 10:55:15.330809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.330821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.330969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.330981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.331123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.331135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.331200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.331212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.331282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.331294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.331446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.331459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.331540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.331551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.331653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.331665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.331745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.331756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.331828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.331840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 
00:26:47.855 [2024-11-07 10:55:15.331994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.332006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.332077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.332089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.332234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.332246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.332319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.332331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.332537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.332549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.332767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.332779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.332860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.332872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.333008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.333020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.333190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.333201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.333298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.333310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 
00:26:47.855 [2024-11-07 10:55:15.333395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.333406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.333555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.333566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.333637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.333649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.333814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.333825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.333917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.333929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.334077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.334089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.334217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.855 [2024-11-07 10:55:15.334229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.855 qpair failed and we were unable to recover it. 00:26:47.855 [2024-11-07 10:55:15.334300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.334311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.334380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.334391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.334536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.334548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 
00:26:47.856 [2024-11-07 10:55:15.334696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.334708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.334800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.334812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.334949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.334963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.335090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.335101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.335234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.335246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.335325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.335336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.335397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.335408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.335496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.335509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.335654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.335666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.335744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.335756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 
00:26:47.856 [2024-11-07 10:55:15.335821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.335833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.335958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.335970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.336117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.336129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.336200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.336211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.336355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.336367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.336531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.336544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.336687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.336699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.336774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.336786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.336949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.336960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.337090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.337102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 
00:26:47.856 [2024-11-07 10:55:15.337184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.337196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.337268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.337280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.337373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.337384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.856 [2024-11-07 10:55:15.337477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.856 [2024-11-07 10:55:15.337488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.856 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.337573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.337585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.337721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.337733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.337811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.337823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.338022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.338034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.338114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.338125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.338221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.338233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 
00:26:47.857 [2024-11-07 10:55:15.338313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.338325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.338459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.338472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.338547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.338559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.338709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.338721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.338796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.338807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.338877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.338889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.339054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.339067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.339134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.339145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.339341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.339353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.339505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.339517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 
00:26:47.857 [2024-11-07 10:55:15.339738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.339750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.339891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.339903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.340009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.340023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.340247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.340258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.340356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.340368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.340454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.340466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.340670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.340682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.340750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.340762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.340901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.340912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.340987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.340998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 
00:26:47.857 [2024-11-07 10:55:15.341075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.341086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.341163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.341174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.341245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.341257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.341322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.341333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.857 [2024-11-07 10:55:15.341483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.857 [2024-11-07 10:55:15.341495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.857 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.341721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.341733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.341865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.341876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.341956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.341968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.342039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.342050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.342132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.342144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 
00:26:47.858 [2024-11-07 10:55:15.342206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.342217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.342362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.342373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.342454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.342465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.342596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.342609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.342687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.342699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.342767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.342779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.342843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.342854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.342924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.342935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.343000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.343012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.343099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.343117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 
00:26:47.858 [2024-11-07 10:55:15.343221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.343236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.343378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.343393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.343495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.343511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.343604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.343619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.343704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.343718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.343824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.343841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.343999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.344014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.344091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.344106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.344245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.344258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.344323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.344333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 
00:26:47.858 [2024-11-07 10:55:15.344423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.344440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.344523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.344535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.344673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.344687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.344754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.344765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.858 [2024-11-07 10:55:15.344920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.858 [2024-11-07 10:55:15.344931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.858 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.345155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.345166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.345295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.345306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.345440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.345452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.345551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.345562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.345650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.345662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 
00:26:47.859 [2024-11-07 10:55:15.345801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.345813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.345886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.345898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.346030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.346042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.346193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.346204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.346286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.346297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.346375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.346386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.346536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.346549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.346630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.346642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.346715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.346728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.346875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.346886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 
00:26:47.859 [2024-11-07 10:55:15.347020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.347032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.347096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.347107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.347304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.347316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.347389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.347401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.347464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.347476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.347671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.347683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.347758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.347770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.347905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.347916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.347990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.348002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.348115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.348138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 
00:26:47.859 [2024-11-07 10:55:15.348228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.348243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.348331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.348347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.348509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.348526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.348683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.348699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.348851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.348867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.348956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.348972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.349068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.349084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.349175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.859 [2024-11-07 10:55:15.349190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.859 qpair failed and we were unable to recover it. 00:26:47.859 [2024-11-07 10:55:15.349282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.349297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.349443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.349459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 
00:26:47.860 [2024-11-07 10:55:15.349618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.349634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.349702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.349717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.349870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.349891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.350040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.350056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.350191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.350206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.350281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.350297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.350521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.350536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.350702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.350718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.350795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.350810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.351030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.351045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 
00:26:47.860 [2024-11-07 10:55:15.351221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.351238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.351443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.351459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.351560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.351576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.351713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.351729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.351757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:47.860 [2024-11-07 10:55:15.351822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.351837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.351997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.352013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.352169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.352183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.352385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.352397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.352547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.352558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 
00:26:47.860 [2024-11-07 10:55:15.352637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.352649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.352905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.352917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.353004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.353015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.353083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.353095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.353162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.353173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.353306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.353317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.353446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.353459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.353592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.353603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.353738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.353749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.353836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.353848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 
00:26:47.860 [2024-11-07 10:55:15.353985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.353997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.860 qpair failed and we were unable to recover it. 00:26:47.860 [2024-11-07 10:55:15.354217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.860 [2024-11-07 10:55:15.354228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.354369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.354382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.354517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.354529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.354668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.354681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.354755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.354766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.354837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.354849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.354915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.354927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.355056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.355068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.355314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.355327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 
00:26:47.861 [2024-11-07 10:55:15.355411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.355422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.355515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.355533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.355632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.355648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.355791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.355810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.355952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.355969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.356044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.356060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.356207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.356223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.356447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.356463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.356606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.356622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.356861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.356877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 
00:26:47.861 [2024-11-07 10:55:15.357022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.357038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.357128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.357143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.357365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.357381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.357517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.357532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.357759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.357772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.357849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.357860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.357928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.357939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.358104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.358117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.358266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.358278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.358378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.358391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 
00:26:47.861 [2024-11-07 10:55:15.358461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.358473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.861 qpair failed and we were unable to recover it. 00:26:47.861 [2024-11-07 10:55:15.358610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.861 [2024-11-07 10:55:15.358622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.358766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.358778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.358859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.358871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.359029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.359042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.359172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.359185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.359268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.359280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.359495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.359510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.359644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.359658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.359802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.359815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 
00:26:47.862 [2024-11-07 10:55:15.359895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.359908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.359978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.359990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.360062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.360074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.360147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.360159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.360250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.360262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.360358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.360370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.360453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.360465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.360693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.360707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.360841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.360853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.360918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.360930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 
00:26:47.862 [2024-11-07 10:55:15.361016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.361027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.361178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.361191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.361323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.361335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.361399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.361414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.361555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.361569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.361714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.361728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.361803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.361815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.362041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.362054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.362143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.362157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.362240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.362252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 
00:26:47.862 [2024-11-07 10:55:15.362413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.362426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.362608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.362621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.862 qpair failed and we were unable to recover it. 00:26:47.862 [2024-11-07 10:55:15.362700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.862 [2024-11-07 10:55:15.362712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.362804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.362816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.362910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.362922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.363054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.363066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.363206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.363221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.363386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.363397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.363543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.363556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.363698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.363710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 
00:26:47.863 [2024-11-07 10:55:15.363802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.363814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.363946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.363958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.364034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.364046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.364111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.364122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.364253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.364266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.364468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.364480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.364626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.364638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.364799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.364811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.364906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.364918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.364997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.365009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 
00:26:47.863 [2024-11-07 10:55:15.365071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.365083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.365162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.365173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.365321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.365334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.365419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.365430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.365513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.365525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.365660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.365672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.365816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.365828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.365896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.365907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.365985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.365996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.366193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.366205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 
00:26:47.863 [2024-11-07 10:55:15.366286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.366298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.366429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.366447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.366540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.366552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.366628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.366642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.366731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.366743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.863 [2024-11-07 10:55:15.366832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.863 [2024-11-07 10:55:15.366844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.863 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.366907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.366919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.367055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.367067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.367197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.367209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.367293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.367306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 
00:26:47.864 [2024-11-07 10:55:15.367385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.367396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.367464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.367477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.367557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.367568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.367703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.367715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.367789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.367801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.367879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.367890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.367953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.367965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.368165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.368176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.368262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.368273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.368336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.368348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 
00:26:47.864 [2024-11-07 10:55:15.368421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.368444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.368579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.368591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.368659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.368670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.368761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.368773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.368902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.368915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.369114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.369126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.369253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.369265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.369397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.369409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.369487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.369499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.369568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.369580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 
00:26:47.864 [2024-11-07 10:55:15.369660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.369671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.369774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.369786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.369859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.369871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.369935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.369947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.370073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.370084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.370158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.370169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.370367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.370378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.370525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.370537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.864 qpair failed and we were unable to recover it. 00:26:47.864 [2024-11-07 10:55:15.370682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.864 [2024-11-07 10:55:15.370694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.370769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.370781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 
00:26:47.865 [2024-11-07 10:55:15.370856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.370868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.370948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.370960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.371110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.371121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.371299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.371313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.371395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.371407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.371474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.371486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.371584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.371595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.371723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.371735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.371806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.371818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.371909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.371920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 
00:26:47.865 [2024-11-07 10:55:15.372052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.372064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.372126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.372138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.372265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.372276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.372341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.372352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.372487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.372499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.372577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.372589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.372746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.372758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.372838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.372850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.372948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.372959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.373104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.373115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 
00:26:47.865 [2024-11-07 10:55:15.373278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.373289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.373431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.373446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.373590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.373602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.373687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.373698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.373798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.373810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.373955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.373966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.374041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.374053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.374189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.374201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.374267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.374279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.374359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.374370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 
00:26:47.865 [2024-11-07 10:55:15.374452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.374465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.374535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.374546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.374785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.374798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.374877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.374889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.374969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.374981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.375186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.375198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.375260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.375272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.865 qpair failed and we were unable to recover it. 00:26:47.865 [2024-11-07 10:55:15.375346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.865 [2024-11-07 10:55:15.375358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.375445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.375458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.375525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.375536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 
00:26:47.866 [2024-11-07 10:55:15.375593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.375604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.375673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.375684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.375819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.375831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.376030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.376045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.376116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.376128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.376195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.376207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.376367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.376379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.376445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.376456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.376590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.376602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.376764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.376775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 
00:26:47.866 [2024-11-07 10:55:15.376919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.376930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.376997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.377008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.377262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.377273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.377414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.377426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.377514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.377525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.377602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.377614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.377743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.377754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.377835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.377846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.378070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.378082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.378232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.378243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 
00:26:47.866 [2024-11-07 10:55:15.378352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.378364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.378445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.378457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.378627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.378639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.378771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.378783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.378846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.378858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.379019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.379031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.379122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.379135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.379280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.379292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.379386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.379398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.379497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.379509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 
00:26:47.866 [2024-11-07 10:55:15.379605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.379617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.379766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.379778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.379928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.866 [2024-11-07 10:55:15.379940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.866 qpair failed and we were unable to recover it. 00:26:47.866 [2024-11-07 10:55:15.380007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.380018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.380170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.380182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.380255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.380267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.380422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.380446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.380584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.380596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.380676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.380688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.380899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.380910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 
00:26:47.867 [2024-11-07 10:55:15.380987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.380999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.381092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.381104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.381168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.381180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.381311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.381324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.381522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.381534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.381689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.381701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.381865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.381877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.381963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.381975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.382040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.382051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.382116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.382128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 
00:26:47.867 [2024-11-07 10:55:15.382208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.382219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.382289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.382301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.382451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.382463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.382599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.382611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.382753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.382764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.382827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.382839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.382913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.382925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.383003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.383015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.383210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.383222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.383318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.383330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 
00:26:47.867 [2024-11-07 10:55:15.383408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.383420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.383507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.383519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.383690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.383703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.383831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.383842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.383922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.867 [2024-11-07 10:55:15.383934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.867 qpair failed and we were unable to recover it. 00:26:47.867 [2024-11-07 10:55:15.384026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.384038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.384288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.384300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.384390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.384402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.384469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.384480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.384680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.384692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 
00:26:47.868 [2024-11-07 10:55:15.384787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.384821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.384916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.384933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.385010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.385026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.385177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.385193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.385338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.385353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.385425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.385445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.385675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.385691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.385871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.385886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.386041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.386057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.386264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.386280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 
00:26:47.868 [2024-11-07 10:55:15.386500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.386516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.386689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.386705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.386852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.386868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.386946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.386962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.387114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.387130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.387279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.387295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.387395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.387410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.387501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.387518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.387670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.387686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.387844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.387860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 
00:26:47.868 [2024-11-07 10:55:15.387940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.387956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.388033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.388049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.388152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.388168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.388320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.388336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.388424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.388446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.388537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.388552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.388695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.388710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.388913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.388928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.389103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.389115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.389268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.389279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 
00:26:47.868 [2024-11-07 10:55:15.389357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.389369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.389497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.389509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.389639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.389651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.389821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.389833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.389928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.389940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.868 [2024-11-07 10:55:15.390024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.868 [2024-11-07 10:55:15.390035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.868 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.390130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.390142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.390208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.390219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.390356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.390378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.390531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.390544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 
00:26:47.869 [2024-11-07 10:55:15.390619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.390631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.390833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.390847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.390939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.390954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.391020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.391033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.391225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.391240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.391339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.391353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.391456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.391472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.391681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.391694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.391777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.391789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.391887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.391899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 
00:26:47.869 [2024-11-07 10:55:15.392039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.392051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.392210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.392224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.392388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.392403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.392653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.392667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.392872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.392886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.393031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.393045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.393184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.393198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.393399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.393413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.393537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.393550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.393692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.393703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 
00:26:47.869 [2024-11-07 10:55:15.393799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.869 [2024-11-07 10:55:15.393810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.869 qpair failed and we were unable to recover it.
00:26:47.869 [2024-11-07 10:55:15.393881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:47.869 [2024-11-07 10:55:15.393911] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:47.869 [2024-11-07 10:55:15.393919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:47.869 [2024-11-07 10:55:15.393926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:47.869 [2024-11-07 10:55:15.393931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:47.869 [2024-11-07 10:55:15.393952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.869 [2024-11-07 10:55:15.393963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.869 qpair failed and we were unable to recover it.
00:26:47.869 [2024-11-07 10:55:15.394095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.869 [2024-11-07 10:55:15.394105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.869 qpair failed and we were unable to recover it.
00:26:47.869 [2024-11-07 10:55:15.394179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.869 [2024-11-07 10:55:15.394190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.869 qpair failed and we were unable to recover it.
00:26:47.869 [2024-11-07 10:55:15.394267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.869 [2024-11-07 10:55:15.394281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.869 qpair failed and we were unable to recover it.
00:26:47.869 [2024-11-07 10:55:15.394366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.869 [2024-11-07 10:55:15.394380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.869 qpair failed and we were unable to recover it.
00:26:47.869 [2024-11-07 10:55:15.394445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.869 [2024-11-07 10:55:15.394457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.869 qpair failed and we were unable to recover it.
00:26:47.869 [2024-11-07 10:55:15.394603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.869 [2024-11-07 10:55:15.394616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.869 qpair failed and we were unable to recover it.
00:26:47.869 [2024-11-07 10:55:15.394682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.394694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.394829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.394841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.394915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.394926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.395012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.395024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.395101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.395112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.395249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.395262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.395345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.395356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.869 [2024-11-07 10:55:15.395444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.869 [2024-11-07 10:55:15.395455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.869 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.395519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.395531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.395592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.395602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 
00:26:47.870 [2024-11-07 10:55:15.395568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:26:47.870 [2024-11-07 10:55:15.395668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.870 [2024-11-07 10:55:15.395688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.870 qpair failed and we were unable to recover it.
00:26:47.870 [2024-11-07 10:55:15.395675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:26:47.870 [2024-11-07 10:55:15.395781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:26:47.870 [2024-11-07 10:55:15.395832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.870 [2024-11-07 10:55:15.395843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.870 qpair failed and we were unable to recover it.
00:26:47.870 [2024-11-07 10:55:15.395782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:26:47.870 [2024-11-07 10:55:15.395914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.870 [2024-11-07 10:55:15.395925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.870 qpair failed and we were unable to recover it.
00:26:47.870 [2024-11-07 10:55:15.395999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.870 [2024-11-07 10:55:15.396010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.870 qpair failed and we were unable to recover it.
00:26:47.870 [2024-11-07 10:55:15.396161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.870 [2024-11-07 10:55:15.396173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.870 qpair failed and we were unable to recover it.
00:26:47.870 [2024-11-07 10:55:15.396243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.870 [2024-11-07 10:55:15.396254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.870 qpair failed and we were unable to recover it.
00:26:47.870 [2024-11-07 10:55:15.396332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.870 [2024-11-07 10:55:15.396343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.870 qpair failed and we were unable to recover it.
00:26:47.870 [2024-11-07 10:55:15.396412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.870 [2024-11-07 10:55:15.396423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.870 qpair failed and we were unable to recover it.
00:26:47.870 [2024-11-07 10:55:15.396510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.870 [2024-11-07 10:55:15.396523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:47.870 qpair failed and we were unable to recover it.
00:26:47.870 [2024-11-07 10:55:15.396743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.396755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.396829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.396840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.396973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.396985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.397181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.397193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.397341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.397353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.397483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.397495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.397651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.397663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.397802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.397814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.397912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.397924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.398011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.398023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 
00:26:47.870 [2024-11-07 10:55:15.398178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.398190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.398354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.398365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.398446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.398458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.398531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.398543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.398675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.398687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.398771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.398783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.398875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.398886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.398977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.398989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.399142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.399154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.399227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.399239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 
00:26:47.870 [2024-11-07 10:55:15.399442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.399454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.399522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.399533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.399612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.399624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.399779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.399791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.399861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.870 [2024-11-07 10:55:15.399874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.870 qpair failed and we were unable to recover it. 00:26:47.870 [2024-11-07 10:55:15.399943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.399954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.400081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.400094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.400301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.400313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.400402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.400414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.400620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.400632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 
00:26:47.871 [2024-11-07 10:55:15.400702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.400717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.400803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.400814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.400891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.400903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.400982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.400994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.401060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.401080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.401212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.401223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.401396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.401407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.401486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.401498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.401566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.401578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.401717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.401728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 
00:26:47.871 [2024-11-07 10:55:15.401841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.401853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.401940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.401952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.402107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.402118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.402267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.402279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.402357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.402368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.402427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.402445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.402522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.402534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.402667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.402680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.402764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.402775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.402849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.402860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 
00:26:47.871 [2024-11-07 10:55:15.402993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.403005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.403093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.403105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.403172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.403183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.403257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.403268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.403330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.403342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.403583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.403596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.403660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.403672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.403813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.403825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.403901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.403912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.871 [2024-11-07 10:55:15.404111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.404123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 
00:26:47.871 [2024-11-07 10:55:15.404255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.871 [2024-11-07 10:55:15.404267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.871 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.404352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.404364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.404498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.404510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.404588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.404600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.404742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.404754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.404832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.404845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.404974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.404986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.405188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.405200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.405279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.405290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.405380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.405392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 
00:26:47.872 [2024-11-07 10:55:15.405563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.405578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.405779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.405790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.405927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.405940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.406006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.406019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.406146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.406159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.406224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.406235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.406304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.406316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.406374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.406386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.406476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.406488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.406656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.406669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 
00:26:47.872 [2024-11-07 10:55:15.406734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.406744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.406824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.872 [2024-11-07 10:55:15.406835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.872 qpair failed and we were unable to recover it. 00:26:47.872 [2024-11-07 10:55:15.406909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.406922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.407053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.407064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.407210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.407223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.407301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.407312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.407543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.407557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.407633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.407644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.407712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.407724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.407790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.407802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 
00:26:47.873 [2024-11-07 10:55:15.407937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.407950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.408011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.408023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.408172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.408184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.408261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.408273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.408351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.408362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.408490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.408503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.408643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.408655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.408852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.408880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.408971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.408986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.409158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.409173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 
00:26:47.873 [2024-11-07 10:55:15.409269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.409285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.409375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.409391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.409478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.409494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.409570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.409585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.409723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.409740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.409827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.409842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.409983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.410000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.410208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.410224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.410368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.410384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.410541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.410558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 
00:26:47.873 [2024-11-07 10:55:15.410714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.410735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.410819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.410835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.410993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.411009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.411087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.411103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.411262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.873 [2024-11-07 10:55:15.411278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.873 qpair failed and we were unable to recover it. 00:26:47.873 [2024-11-07 10:55:15.411485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.411502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.411644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.411660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.411838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.411854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.411995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.412011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.412101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.412117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 
00:26:47.874 [2024-11-07 10:55:15.412267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.412283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.412443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.412460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.412559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.412576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.412658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.412673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.412896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.412913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.413081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.413098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.413194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.413210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.413363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.413379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.413526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.413543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.413627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.413643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 
00:26:47.874 [2024-11-07 10:55:15.413717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.413734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.413824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.413840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.414003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.414021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.414102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.414121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.414260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.414273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.414346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.414359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.414441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.414454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.414613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.414639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.414823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.414839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.414912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.414927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 
00:26:47.874 [2024-11-07 10:55:15.415084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.415099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.415332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.415347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.415437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.415453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.415668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.415684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.874 [2024-11-07 10:55:15.415862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.874 [2024-11-07 10:55:15.415878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.874 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.416030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.416046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.416196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.416211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.416418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.416440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.416542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.416558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.416730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.416746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 
00:26:47.875 [2024-11-07 10:55:15.416844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.416863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.417016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.417031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.417133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.417149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.417243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.417259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.417466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.417483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.417558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.417573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.417714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.417730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.417941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.417957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.418113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.418129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.418208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.418223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 
00:26:47.875 [2024-11-07 10:55:15.418431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.418451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.418527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.418542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.418617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.418633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.418735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.418750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.418846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.418862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.419022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.419038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.419131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.419146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.419376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.419392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.419536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.419551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.419696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.419711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 
00:26:47.875 [2024-11-07 10:55:15.419786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.419801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.419880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.419897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.420054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.420071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.420261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.420277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.420353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.875 [2024-11-07 10:55:15.420365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.875 qpair failed and we were unable to recover it. 00:26:47.875 [2024-11-07 10:55:15.420506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.420519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.420618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.420630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.420798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.420825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.420936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.420956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.421068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.421084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 
00:26:47.876 [2024-11-07 10:55:15.421178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.421194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.421379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.421394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.421623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.421640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.421712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.421728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.421874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.421890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.421969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.421985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.422073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.422088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.422303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.422318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.422479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.422495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.422592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.422606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 
00:26:47.876 [2024-11-07 10:55:15.422688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.422705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.422917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.422929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.423075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.423087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.423233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.423244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.423324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.423336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.423422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.423437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.423545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.423556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.423759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.423771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.423923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.423935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.423995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.424007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 
00:26:47.876 [2024-11-07 10:55:15.424146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.424158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.424310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.424322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.424396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.424408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.424488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.424500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.424588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.424600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.424736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.424747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.424837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.424848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.425019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.876 [2024-11-07 10:55:15.425031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.876 qpair failed and we were unable to recover it. 00:26:47.876 [2024-11-07 10:55:15.425127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.425139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.425217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.425229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 
00:26:47.877 [2024-11-07 10:55:15.425440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.425454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.425596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.425607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.425830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.425842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.425911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.425923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.425992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.426004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.426081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.426092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.426232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.426244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.426331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.426351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.426450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.426469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.426564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.426581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 
00:26:47.877 [2024-11-07 10:55:15.426675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.426691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.426853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.426868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.427012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.427028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.427187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.427203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.427430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.427451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.427548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.427564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.427731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.427747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.427852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.427868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.428017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.428033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.428120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.428135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 
00:26:47.877 [2024-11-07 10:55:15.428220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.428236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.428327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.428343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.428418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.428437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.428511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.428523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.428677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.428689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.428831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.877 [2024-11-07 10:55:15.428842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.877 qpair failed and we were unable to recover it. 00:26:47.877 [2024-11-07 10:55:15.429005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.878 [2024-11-07 10:55:15.429017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.878 qpair failed and we were unable to recover it. 00:26:47.878 [2024-11-07 10:55:15.429082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.878 [2024-11-07 10:55:15.429093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.878 qpair failed and we were unable to recover it. 00:26:47.878 [2024-11-07 10:55:15.429170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.878 [2024-11-07 10:55:15.429182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.878 qpair failed and we were unable to recover it. 00:26:47.878 [2024-11-07 10:55:15.429248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.878 [2024-11-07 10:55:15.429259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.878 qpair failed and we were unable to recover it. 
00:26:47.878 [2024-11-07 10:55:15.429357 .. 10:55:15.455476] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the same three-line sequence ("connect() failed, errno = 111", "sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it.") repeats for every connection attempt in this interval.
00:26:47.884 [2024-11-07 10:55:15.455619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.884 [2024-11-07 10:55:15.455631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.884 qpair failed and we were unable to recover it. 00:26:47.884 [2024-11-07 10:55:15.455712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.884 [2024-11-07 10:55:15.455723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.884 qpair failed and we were unable to recover it. 00:26:47.884 [2024-11-07 10:55:15.455793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.884 [2024-11-07 10:55:15.455804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.884 qpair failed and we were unable to recover it. 00:26:47.884 [2024-11-07 10:55:15.455974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.884 [2024-11-07 10:55:15.455986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.884 qpair failed and we were unable to recover it. 00:26:47.884 [2024-11-07 10:55:15.456079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.884 [2024-11-07 10:55:15.456090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.884 qpair failed and we were unable to recover it. 00:26:47.884 [2024-11-07 10:55:15.456186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.884 [2024-11-07 10:55:15.456197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.884 qpair failed and we were unable to recover it. 00:26:47.884 [2024-11-07 10:55:15.456259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.884 [2024-11-07 10:55:15.456270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.884 qpair failed and we were unable to recover it. 00:26:47.884 [2024-11-07 10:55:15.456345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.884 [2024-11-07 10:55:15.456356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.884 qpair failed and we were unable to recover it. 00:26:47.884 [2024-11-07 10:55:15.456494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.884 [2024-11-07 10:55:15.456506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.884 qpair failed and we were unable to recover it. 00:26:47.884 [2024-11-07 10:55:15.456571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.884 [2024-11-07 10:55:15.456583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.884 qpair failed and we were unable to recover it. 
00:26:47.884 [2024-11-07 10:55:15.456647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.884 [2024-11-07 10:55:15.456658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.884 qpair failed and we were unable to recover it. 00:26:47.884 [2024-11-07 10:55:15.456742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.884 [2024-11-07 10:55:15.456753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.884 qpair failed and we were unable to recover it. 00:26:47.884 [2024-11-07 10:55:15.456887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.884 [2024-11-07 10:55:15.456899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.884 qpair failed and we were unable to recover it. 00:26:47.885 [2024-11-07 10:55:15.457035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.885 [2024-11-07 10:55:15.457047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.885 qpair failed and we were unable to recover it. 00:26:47.885 [2024-11-07 10:55:15.457240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.885 [2024-11-07 10:55:15.457252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:47.885 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.457508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.457521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.457590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.457602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.457669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.457683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.457892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.457904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.457969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.457981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 
00:26:48.165 [2024-11-07 10:55:15.458115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.458126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.458259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.458270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.458358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.458370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.458455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.458467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.458545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.458557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.458632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.458644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.458736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.458749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.458809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.458821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.458949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.458960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.459024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.459036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 
00:26:48.165 [2024-11-07 10:55:15.459172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.459184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.459320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.459331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.459461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.459473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.459552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.459564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.459643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.459654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.459732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.459743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.459834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.459846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.460062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.460074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.460161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.165 [2024-11-07 10:55:15.460173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.165 qpair failed and we were unable to recover it. 00:26:48.165 [2024-11-07 10:55:15.460321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.460333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 
00:26:48.166 [2024-11-07 10:55:15.460428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.460444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.460517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.460529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.460603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.460615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.460712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.460723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.460933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.460945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.461088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.461100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.461235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.461247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.461329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.461340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.461401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.461412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.461596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.461608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 
00:26:48.166 [2024-11-07 10:55:15.461738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.461749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.461823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.461835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.461978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.461989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.462130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.462142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.462208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.462220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.462286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.462297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.462370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.462382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.462535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.462549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.462688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.462700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.462838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.462850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 
00:26:48.166 [2024-11-07 10:55:15.462911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.462923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.462982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.462994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.463220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.463231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.463361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.463380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.463541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.463552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.463723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.463735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.463907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.463919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.464085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.464097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.464306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.464318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.464548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.464560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 
00:26:48.166 [2024-11-07 10:55:15.464694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.464706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.464785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.464797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.166 [2024-11-07 10:55:15.464872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.166 [2024-11-07 10:55:15.464883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.166 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.465040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.465052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.465274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.465286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.465490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.465502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.465653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.465665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.465753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.465765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.465928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.465939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.466083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.466095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 
00:26:48.167 [2024-11-07 10:55:15.466229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.466240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.466318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.466329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.466478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.466491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.466619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.466631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.466715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.466727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.466818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.466830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.466965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.466977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.467041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.467053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.467131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.467142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.467281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.467293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 
00:26:48.167 [2024-11-07 10:55:15.467441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.467453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.467589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.467601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.467670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.467682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.467759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.467772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.467836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.467847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.467920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.467931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.468007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.468018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.468103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.468117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.468247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.468259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.468383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.468394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 
00:26:48.167 [2024-11-07 10:55:15.468464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.468477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.468554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.468566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.468702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.468714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.468784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.468795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.468873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.468885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.468945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.468957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.167 [2024-11-07 10:55:15.469091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.167 [2024-11-07 10:55:15.469102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.167 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.469179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.469190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.469336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.469348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.469478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.469490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 
00:26:48.168 [2024-11-07 10:55:15.469574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.469586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.469791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.469803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.469864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.469876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.469963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.469975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.470102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.470114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.470202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.470213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.470296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.470307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.470380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.470391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.470458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.470470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.470684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.470696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 
00:26:48.168 [2024-11-07 10:55:15.470763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.470775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.470920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.470932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.471080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.471091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.471223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.471235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.471534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.471560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.471758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.471790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.471963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.471980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.472069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.472085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.472171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.472187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.472393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.472409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 
00:26:48.168 [2024-11-07 10:55:15.472497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.472513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.472712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.472727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.472798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.472813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.472975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.472990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.473220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.473236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.473325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.473341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.473550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.473567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.473650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.473671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.473823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.473838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.473930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.473946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 
00:26:48.168 [2024-11-07 10:55:15.474204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.168 [2024-11-07 10:55:15.474220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.168 qpair failed and we were unable to recover it. 00:26:48.168 [2024-11-07 10:55:15.474384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.474399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.474561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.474578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.474676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.474692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.474859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.474875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.474976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.474991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.475067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.475083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.475224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.475241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.475323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.475336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.475468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.475480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 
00:26:48.169 [2024-11-07 10:55:15.475549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.475561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.475790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.475801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.475952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.475963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.476123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.476134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.476271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.476283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.476418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.476429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.476611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.476623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.476696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.476708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.476856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.476868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.477028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.477040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 
00:26:48.169 [2024-11-07 10:55:15.477115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.477126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.477271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.477282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.477432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.477450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.477620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.477632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.477723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.477740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.477846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.477862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.477949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.477964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.478101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.478117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.478260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.478276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.478431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.478451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 
00:26:48.169 [2024-11-07 10:55:15.478605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.478620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.478697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.478713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.478814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.478830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.478904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.478920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.479005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.169 [2024-11-07 10:55:15.479020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.169 qpair failed and we were unable to recover it. 00:26:48.169 [2024-11-07 10:55:15.479100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.479114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.479258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.479274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.479357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.479377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.479463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.479479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.479627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.479642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 
00:26:48.170 [2024-11-07 10:55:15.479780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.479795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.479964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.479979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.480066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.480082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.480162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.480177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.480385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.480401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f896c000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.480594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.480607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.480755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.480767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.480858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.480869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.481089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.481100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.481230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.481242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 
00:26:48.170 [2024-11-07 10:55:15.481320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.481332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.481507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.481519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.481652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.481664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.481799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.481810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.481938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.481950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.482033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.482044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.482112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.482124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.482286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.482298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.482429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.482448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.482541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.482553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 
00:26:48.170 [2024-11-07 10:55:15.482698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.482709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.482789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.482801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.482940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.482952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.170 [2024-11-07 10:55:15.483122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.170 [2024-11-07 10:55:15.483134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.170 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.483341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.483352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.483501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.483513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.483595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.483607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.483753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.483765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.483907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.483919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.484050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.484061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 
00:26:48.171 [2024-11-07 10:55:15.484128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.484140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.484203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.484216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.484351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.484363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.484441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.484452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.484536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.484548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.484611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.484622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.484761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.484773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.484910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.484924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.485065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.485077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.485212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.485224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 
00:26:48.171 [2024-11-07 10:55:15.485354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.485365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.485456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.485468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.485609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.485621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.485695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.485706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.485770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.485782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.485854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.485867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.485999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.486011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.486093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.486105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.486197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.486209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.486278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.486290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 
00:26:48.171 [2024-11-07 10:55:15.486420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.486431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.486529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.486541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.486702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.486713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:48.171 [2024-11-07 10:55:15.486843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.486855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.487022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.487033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 [2024-11-07 10:55:15.487096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.487108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.171 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:26:48.171 [2024-11-07 10:55:15.487256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.171 [2024-11-07 10:55:15.487268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.171 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.487398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.487410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.487492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.487506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 
00:26:48.172 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.172 [2024-11-07 10:55:15.487726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.487739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:48.172 [2024-11-07 10:55:15.487828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.487840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.487976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.487988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.172 [2024-11-07 10:55:15.488131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.488144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.488296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.488308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.488381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.488395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.488547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.488560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.488712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.488726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.488864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.488876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 
00:26:48.172 [2024-11-07 10:55:15.489022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.489033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.489165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.489176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.489323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.489334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.489431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.489447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.489521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.489547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.489618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.489628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.489706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.489716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.489799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.489811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.490025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.490036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.490120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.490131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 
00:26:48.172 [2024-11-07 10:55:15.490268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.490278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.490462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.490473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.490566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.490577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.490653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.490663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.490729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.490739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.490868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.490880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.491027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.491038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.491124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.491134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.491220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.491231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.491306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.491315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 
00:26:48.172 [2024-11-07 10:55:15.491463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.172 [2024-11-07 10:55:15.491475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.172 qpair failed and we were unable to recover it. 00:26:48.172 [2024-11-07 10:55:15.491566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.491578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.491656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.491667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.491809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.491820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.491964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.491975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.492043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.492055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.492186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.492196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.492292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.492303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.492447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.492458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.492557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.492567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 
00:26:48.173 [2024-11-07 10:55:15.492732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.492743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.492807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.492817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.492951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.492962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.493040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.493051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.493244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.493272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.493478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.493496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.493589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.493603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.493690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.493705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.493865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.493880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.493973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.493989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 
00:26:48.173 [2024-11-07 10:55:15.494063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.494078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.494221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.494236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.494320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.494336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.494499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.494510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.494591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.494602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.494774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.494785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.494857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.494867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.494995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.495007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.495084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.495094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 00:26:48.173 [2024-11-07 10:55:15.495163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.173 [2024-11-07 10:55:15.495173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.173 qpair failed and we were unable to recover it. 
00:26:48.173 [2024-11-07 10:55:15.495346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.173 [2024-11-07 10:55:15.495357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:48.173 qpair failed and we were unable to recover it.
00:26:48.173 [... the same triplet of messages (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it.") repeats continuously from 10:55:15.495 through 10:55:15.517, alternating between tqpair=0x7f8970000b90 and tqpair=0x1f1dbe0, always for addr=10.0.0.2, port=4420 ...]
00:26:48.180 [2024-11-07 10:55:15.517682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.180 [2024-11-07 10:55:15.517692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:48.180 qpair failed and we were unable to recover it.
00:26:48.180 [2024-11-07 10:55:15.517762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.517772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.517845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.517858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.517930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.517940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.518000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.518010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.518082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.518093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.518167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.518177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.518317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.518327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.518391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.518402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.518478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.518489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.518554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.518565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 
00:26:48.180 [2024-11-07 10:55:15.518634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.518644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.518723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.518733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.518816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.518827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.518900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.518910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.518985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.518996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.519101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.519112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.519189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.519199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.519279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.180 [2024-11-07 10:55:15.519290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.180 qpair failed and we were unable to recover it. 00:26:48.180 [2024-11-07 10:55:15.519351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.519362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.519439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.519450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 
00:26:48.181 [2024-11-07 10:55:15.519513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.519523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.519610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.519620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.519713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.519723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.519797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.519807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.519870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.519880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.519944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.519954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.520014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.520024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.520088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.520100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.520180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.520191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.520260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.520271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 
00:26:48.181 [2024-11-07 10:55:15.520333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.520343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.520411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.520422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.520501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.520512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.520579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.520591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.520658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.520669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.520733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.520744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.520817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.520827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.520896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.520907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.520970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.520981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.521046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.521057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 
00:26:48.181 [2024-11-07 10:55:15.521202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.521214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.521282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.521296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.521368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.521379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.521473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.521485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.521554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.521565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.521636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.521646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.521731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.521742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.521809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.521821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.521901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.521913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.521981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.521992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 
00:26:48.181 [2024-11-07 10:55:15.522066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.522077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.522140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.181 [2024-11-07 10:55:15.522151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.181 qpair failed and we were unable to recover it. 00:26:48.181 [2024-11-07 10:55:15.522222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.522233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.522311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.522323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.522401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.522412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.522487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.522498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.522574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.522585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.522664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.522676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.522757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.522768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.522828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.522838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 
00:26:48.182 [2024-11-07 10:55:15.522912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.522922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.522989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.523001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.523061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.523072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.523132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.523143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.523210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.523220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.523292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.523302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.523373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.523383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.523466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.523476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.523544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.523554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.523619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.523629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 
00:26:48.182 [2024-11-07 10:55:15.523711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.523721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.523788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.523800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.182 [2024-11-07 10:55:15.523873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.523886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.523956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.523967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.524111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.524121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.524197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.524209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:48.182 [2024-11-07 10:55:15.524272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.524284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.524362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.524372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 00:26:48.182 [2024-11-07 10:55:15.524442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.182 [2024-11-07 10:55:15.524453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.182 qpair failed and we were unable to recover it. 
00:26:48.182 [... connect() failed (errno = 111) retries for tqpair=0x7f8970000b90 continue; qpair failed and we were unable to recover it. ...]
00:26:48.182 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:48.183 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:48.183 [... connect() failed (errno = 111) retries continue ...]
00:26:48.183 [2024-11-07 10:55:15.525246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.183 [2024-11-07 10:55:15.525256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420
00:26:48.183 qpair failed and we were unable to recover it.
00:26:48.185 [... the same connect() failed (errno = 111) / sock connection error pair keeps repeating from 10:55:15.525 through 10:55:15.532, alternating between tqpair=0x7f8970000b90 and tqpair=0x7f8978000b90 (both addr=10.0.0.2, port=4420); every attempt ends with "qpair failed and we were unable to recover it." ...]
00:26:48.185 [2024-11-07 10:55:15.532143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.532154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 00:26:48.185 [2024-11-07 10:55:15.532223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.532234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 00:26:48.185 [2024-11-07 10:55:15.532295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.532305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 00:26:48.185 [2024-11-07 10:55:15.532378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.532390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 00:26:48.185 [2024-11-07 10:55:15.532466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.532479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 00:26:48.185 [2024-11-07 10:55:15.532551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.532562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 00:26:48.185 [2024-11-07 10:55:15.532628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.532638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 00:26:48.185 [2024-11-07 10:55:15.532700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.532711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 00:26:48.185 [2024-11-07 10:55:15.532774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.532784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 00:26:48.185 [2024-11-07 10:55:15.532849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.532859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 
00:26:48.185 [2024-11-07 10:55:15.532918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.532928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 00:26:48.185 [2024-11-07 10:55:15.533061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.533071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 00:26:48.185 [2024-11-07 10:55:15.533133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.533143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 00:26:48.185 [2024-11-07 10:55:15.533220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.185 [2024-11-07 10:55:15.533230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.185 qpair failed and we were unable to recover it. 00:26:48.185 [2024-11-07 10:55:15.533383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.533393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.533468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.533479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.533550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.533560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.533632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.533642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.533714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.533729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.533876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.533893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 
00:26:48.186 [2024-11-07 10:55:15.533968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.533983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.534077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.534092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.534171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.534186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.534262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.534277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.534355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.534370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.534469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.534484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.534641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.534657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.534730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.534745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.534827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.534841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.534918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.534933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 
00:26:48.186 [2024-11-07 10:55:15.535007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.535022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.535096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.535114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.535191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.535206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.535352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.535366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.535451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.535468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.535620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.535636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.535707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.535722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.535797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.535812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.535884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.535899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.535973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.535988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 
00:26:48.186 [2024-11-07 10:55:15.536058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.536075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.536148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.536163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.536234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.536247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.536316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.536326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.536395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.536405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.536492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.536503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.536567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.536578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.536732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.186 [2024-11-07 10:55:15.536742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.186 qpair failed and we were unable to recover it. 00:26:48.186 [2024-11-07 10:55:15.536809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.536819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.536952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.536963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 
00:26:48.187 [2024-11-07 10:55:15.537025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.537035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.537100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.537110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.537189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.537199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.537262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.537272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.537336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.537346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.537423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.537438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.537519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.537530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.537615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.537625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.537709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.537725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.537806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.537821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 
00:26:48.187 [2024-11-07 10:55:15.537913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.537927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.538000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.538014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.538093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.538108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.538185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.538199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.538284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.538298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.538392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.538406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.538495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.538507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.538580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.538591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.538651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.538661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.538743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.538753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 
00:26:48.187 [2024-11-07 10:55:15.538815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.538825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.538886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.538899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.538965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.538976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.539052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.539063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.539147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.539158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.539226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.539236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.539316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.539326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.539390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.539400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.539477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.539488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.539556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.539566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 
00:26:48.187 [2024-11-07 10:55:15.539647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.539657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.539733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.187 [2024-11-07 10:55:15.539743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.187 qpair failed and we were unable to recover it. 00:26:48.187 [2024-11-07 10:55:15.539803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.539813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.539948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.539958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.540017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.540027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.540101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.540111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.540175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.540185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.540254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.540264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.540325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.540335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.540400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.540410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-11-07 10:55:15.540474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.540484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.540557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.540567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.540627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.540637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.540710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.540720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.540799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.540809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.540875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.540886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.540955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.540966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.541027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.541038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.541113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.541135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.541324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.541338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-11-07 10:55:15.541482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.541497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.541574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.541589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.541731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.541746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.541818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.541833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.541973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.541987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.542126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.542140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.542215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.542229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.542304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.542318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.542395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.542409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.542556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.542571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 
00:26:48.188 [2024-11-07 10:55:15.542665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.542678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.542748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.542763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.542846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.542857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.542934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.542944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.543006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.543017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.543080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.188 [2024-11-07 10:55:15.543090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.188 qpair failed and we were unable to recover it. 00:26:48.188 [2024-11-07 10:55:15.543154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.543164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.543264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.543274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.543343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.543353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.543420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.543430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 
00:26:48.189 [2024-11-07 10:55:15.543541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.543551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.543619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.543629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.543695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.543705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.543769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.543779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.543863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.543873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.543949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.543960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.544068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.544078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.544139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.544149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.544297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.544308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.544375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.544385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 
00:26:48.189 [2024-11-07 10:55:15.544456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.544467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.544598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.544608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.544695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.544706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.544777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.544788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.544919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.544930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.544993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.545002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.545133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.545144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.545207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.545217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.545291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.545308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.545398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.545412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 
00:26:48.189 [2024-11-07 10:55:15.545557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.545572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.545717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.545732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.545807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.545821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.545909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.545924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.546079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.546094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.546169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.546184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.546327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.546342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.546495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.546510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.546606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.546621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 00:26:48.189 [2024-11-07 10:55:15.546769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.189 [2024-11-07 10:55:15.546784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.189 qpair failed and we were unable to recover it. 
00:26:48.189 [2024-11-07 10:55:15.546857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.546872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.546948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.546973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.547053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.547068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.547155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.547170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.547352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.547367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.547544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.547560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.547720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.547735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.547883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.547898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.547983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.547998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.548150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.548165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 
00:26:48.190 [2024-11-07 10:55:15.548238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.548253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.548329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.548344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.548490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.548505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.548722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.548736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.548810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.548825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.548995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.549010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.549109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.549124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.549277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.549292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.549370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.549385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.549474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.549489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 
00:26:48.190 [2024-11-07 10:55:15.549564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.549579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.549658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.549673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.549772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.549787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.549859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.549874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.549959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.549974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.550183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.550199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.550340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.550354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.550438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.550453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.550537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.550555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 00:26:48.190 [2024-11-07 10:55:15.550705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.190 [2024-11-07 10:55:15.550719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.190 qpair failed and we were unable to recover it. 
00:26:48.190 [2024-11-07 10:55:15.550806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.550821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.550898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.550913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.551000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.551015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.551197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.551212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.551294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.551308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.551402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.551417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.551492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.551508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.551581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.551596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.551677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.551691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.551839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.551855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 
00:26:48.191 [2024-11-07 10:55:15.551943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.551957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.552025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.552040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.552182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.552197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.552273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.552287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.552456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.552471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.552561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.552575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.552731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.552745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.552832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.552846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.552941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.552956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.553092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.553106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 
00:26:48.191 [2024-11-07 10:55:15.553333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.553350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.553517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.553535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.553623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.553638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.553722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.553737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.553826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.553840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.553983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.553998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.554087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.554102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.554185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.554201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.554358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.554374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.554472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.554490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 
00:26:48.191 [2024-11-07 10:55:15.554584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.554599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.554767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.554782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.554947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.554962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.555109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.191 [2024-11-07 10:55:15.555127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.191 qpair failed and we were unable to recover it. 00:26:48.191 [2024-11-07 10:55:15.555225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.555240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.555422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.555440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.555602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.555617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.555706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.555721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.555870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.555888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.555965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.555980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 
00:26:48.192 [2024-11-07 10:55:15.556138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.556153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.556302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.556318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.556418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.556438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.556604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.556619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.556759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.556774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.556860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.556875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.556972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.556987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.557123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.557137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.557289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.557304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.557534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.557551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 
00:26:48.192 [2024-11-07 10:55:15.557658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.557673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.557850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.557864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.557967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.557982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.558071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.558086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.558168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.558183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.558272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.558287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.558390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.558404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.558515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.558530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.558676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.558690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.558829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.558844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 
00:26:48.192 [2024-11-07 10:55:15.558989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.559005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.559151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.559167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.559288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.559304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.559381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.559396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.559619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.559635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.559784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.559799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.559899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.559914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.560064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.560079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.560225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.560240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 00:26:48.192 [2024-11-07 10:55:15.560391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.192 [2024-11-07 10:55:15.560406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.192 qpair failed and we were unable to recover it. 
00:26:48.193 [2024-11-07 10:55:15.560497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.560512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.560717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.560733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.560816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.560832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.560912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.560927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.561012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.561027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.561117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.561131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.561208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.561223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.561385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.561400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.561484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.561502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.561594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.561609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 
00:26:48.193 [2024-11-07 10:55:15.561698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.561712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.561787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.561802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.561941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.561955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.562039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.562054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.562139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.562153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.562240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.562255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.562325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.562340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.562422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.562444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.562543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.562558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.562638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.562652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 
00:26:48.193 [2024-11-07 10:55:15.562724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.562739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.562948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.562963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.563137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.563151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.563288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.563302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.563392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.563406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.563488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.563503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.563665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.563680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.563765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.563779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.563969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.563984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.564073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.564087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 
00:26:48.193 [2024-11-07 10:55:15.564158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.564172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.564343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.564358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.564447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.564462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.193 [2024-11-07 10:55:15.564630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.193 [2024-11-07 10:55:15.564645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.193 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.564788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.564804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 Malloc0 00:26:48.194 [2024-11-07 10:55:15.564949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.564963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.565037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.565052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.565141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.565156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.565230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.565245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.565333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.565347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 
00:26:48.194 [2024-11-07 10:55:15.565418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.565467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.194 [2024-11-07 10:55:15.565654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.565670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.565820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.565834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.565918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.565933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:48.194 [2024-11-07 10:55:15.566141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.566156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.566253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.566268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.194 [2024-11-07 10:55:15.566344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.566359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.566446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.566461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 
00:26:48.194 [2024-11-07 10:55:15.566529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.566544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.194 [2024-11-07 10:55:15.566635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.566651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.566862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.566877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.566957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.566971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.567060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.567075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.567165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.567180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.567368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.567382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.567501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.567516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.567592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.567606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 
00:26:48.194 [2024-11-07 10:55:15.567814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.567829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.567986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.568000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.568144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.568159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.568247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.568261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.568405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.568420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.568588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.568603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.568685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.568700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.568790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.568804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.568896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.194 [2024-11-07 10:55:15.568910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.194 qpair failed and we were unable to recover it. 00:26:48.194 [2024-11-07 10:55:15.569047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.569062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 
00:26:48.195 [2024-11-07 10:55:15.569213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.569227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.569320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.569334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.569429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.569451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.569537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.569551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.569641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.569655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.569911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.569926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8978000b90 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.570035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.570060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.570206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.570221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.570329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.570343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.570430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.570450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 
00:26:48.195 [2024-11-07 10:55:15.570537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.570551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.570625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.570639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.570716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.570730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.570821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.570835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.570922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.570936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.571005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.571019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.571165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.571179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.571250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.571265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.571403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.571417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.571571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.571586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 
00:26:48.195 [2024-11-07 10:55:15.571667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.571681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.571753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.571767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.571854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.571868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.571956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.571970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.572043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.572057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.572132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.572146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.572218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.572232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.572308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.572321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.572443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.195 [2024-11-07 10:55:15.572464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.195 [2024-11-07 10:55:15.572479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.195 qpair failed and we were unable to recover it. 00:26:48.195 [2024-11-07 10:55:15.572549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.572562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 
00:26:48.196 [2024-11-07 10:55:15.572654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.572668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.572768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.572782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.572867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.572882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.573029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.573043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.573128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.573142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.573219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.573233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.573391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.573406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.573522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.573538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.573611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.573621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.573683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.573694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 
00:26:48.196 [2024-11-07 10:55:15.573843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.573853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.573945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.573955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.574025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.574035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.574168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.574178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.574279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.574290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.574355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.574365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.574444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.574458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.574590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.574600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.574666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.574676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.574751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.574761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 
00:26:48.196 [2024-11-07 10:55:15.574825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.574835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.574977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.574987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.575079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.575089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.575157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.575167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.575317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.575328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.575401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.575412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.575485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.575495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.575558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.575569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.575635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.575645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.575722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.575733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 
00:26:48.196 [2024-11-07 10:55:15.575897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.575907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.575993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.576004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.576082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.196 [2024-11-07 10:55:15.576092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.196 qpair failed and we were unable to recover it. 00:26:48.196 [2024-11-07 10:55:15.576173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.576183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.576248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.576259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.576324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.576334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.576421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.576431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.576507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.576517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.576592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.576602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.576664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.576674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 
00:26:48.197 [2024-11-07 10:55:15.576803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.576814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.576959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.576969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.577045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.577056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.577130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.577146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.577228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.577242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.577336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.577350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.577442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.577458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.577537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.577551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.577623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.577637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.577707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.577721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 
00:26:48.197 [2024-11-07 10:55:15.577801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.577815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.577908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.577922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.577994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.578008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.578103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.578117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.578259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.578273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.578359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.578373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.578451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.578467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.578578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.578593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.578689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.578703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.578798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.578812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 
00:26:48.197 [2024-11-07 10:55:15.578891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.578905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.578986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.579000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.579143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.579158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.579324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.579339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.579410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.579424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.579525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.579540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.579688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.579703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.197 qpair failed and we were unable to recover it. 00:26:48.197 [2024-11-07 10:55:15.579799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.197 [2024-11-07 10:55:15.579813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.579887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.579901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.579971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.579985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 
00:26:48.198 [2024-11-07 10:55:15.580079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.580096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.580199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.580214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.580376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.580391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.580482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.580497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.580648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.580662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.580822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.580837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.580912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.580926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.198 [2024-11-07 10:55:15.581175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.581190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.581287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.581302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 
00:26:48.198 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:48.198 [2024-11-07 10:55:15.581515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.581530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.581616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.581630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.198 [2024-11-07 10:55:15.581779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.581794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.581883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.581899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.198 [2024-11-07 10:55:15.582004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.582019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.582161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.582175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.582285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.582299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.582414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.582429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 
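Interleaved with the connection retries, the xtrace lines show host/target_disconnect.sh issuing rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 (and, a few lines further down, nvmf_subsystem_add_ns). rpc_cmd is a helper from common/autotest_common.sh whose definition is not reproduced in this log; the sketch below is a hypothetical stand-in that merely matches the visible xtrace pattern (forward the arguments to SPDK's JSON-RPC client, then check the exit status), not the real implementation.

# Hypothetical stand-in for the rpc_cmd helper seen in the xtrace output; the
# real definition lives in common/autotest_common.sh and may differ.
rpc_cmd() {
    "$rootdir/scripts/rpc.py" "$@"   # $rootdir is assumed to point at the SPDK tree
    local rc=$?
    # The '[[ 0 == 0 ]]' xtrace lines above are consistent with a status check
    # of this kind after each RPC call.
    [[ $rc == 0 ]] || return $rc
}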
00:26:48.198 [2024-11-07 10:55:15.582508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.582524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.582677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.582692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.582840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.582855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.582948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.582963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.583044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.583059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.583200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.583214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.583294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.583308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.583453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.583469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.583554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.583571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.583736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.583750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 
00:26:48.198 [2024-11-07 10:55:15.583843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.583857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.583931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.583945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1dbe0 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.584033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.198 [2024-11-07 10:55:15.584046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.198 qpair failed and we were unable to recover it. 00:26:48.198 [2024-11-07 10:55:15.584119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.584129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.584213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.584223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.584320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.584330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.584393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.584403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.584568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.584579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.584715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.584725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.584790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.584800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 
00:26:48.199 [2024-11-07 10:55:15.584871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.584882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.585022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.585031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.585120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.585130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.585202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.585212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.585287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.585297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.585367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.585377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.585456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.585467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.585541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.585551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.585628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.585638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.585701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.585711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 
00:26:48.199 [2024-11-07 10:55:15.585783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.585793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.585928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.585938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.585999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.586009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.586093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.586103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.586236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.586246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.586324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.586334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.586410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.586420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.586497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.586508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.586583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.586594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.586725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.586735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 
00:26:48.199 [2024-11-07 10:55:15.586812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.586822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.586883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.586893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.586950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.586960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.587172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.587182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.587265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.587275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.587335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.587345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.199 [2024-11-07 10:55:15.587475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.199 [2024-11-07 10:55:15.587486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.199 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.587554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.587564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.587625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.587638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.587725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.587735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 
00:26:48.200 [2024-11-07 10:55:15.587795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.587806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.587878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.587888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.587969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.587979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.588058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.588068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.588132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.588143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.588276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.588286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.588353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.588364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.588436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.588447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.588504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.588514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.588581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.588591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 
00:26:48.200 [2024-11-07 10:55:15.588673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.588684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.588749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.588759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.588899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.588910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.200 [2024-11-07 10:55:15.589055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.589066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.589135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.589144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.589226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.589236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.589310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.589321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:48.200 [2024-11-07 10:55:15.589391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.589401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.589547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.589558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 
00:26:48.200 [2024-11-07 10:55:15.589644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.589654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.589719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.589730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.589876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.589886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.589953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.589964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.200 [2024-11-07 10:55:15.590121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.590133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.590194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.590205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.590294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.590304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.590370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.590380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 00:26:48.200 [2024-11-07 10:55:15.590515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.200 [2024-11-07 10:55:15.590525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.200 qpair failed and we were unable to recover it. 
00:26:48.200 [2024-11-07 10:55:15.590590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.590601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.590677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.590688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.590783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.590794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.590870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.590880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.590946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.590956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.591029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.591039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.591123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.591134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.591195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.591205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.591283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.591293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.591428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.591443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 
00:26:48.201 [2024-11-07 10:55:15.591582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.591592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.591673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.591683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.591758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.591768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.591851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.591861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.592002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.592013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.592093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.592102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.592181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.592191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.592255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.592265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.592329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.592339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.592405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.592415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 
00:26:48.201 [2024-11-07 10:55:15.592547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.592557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.592622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.592631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.592722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.592732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.592807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.592817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.592891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.592901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.592987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.592997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.593065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.593076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.593160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.593170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.593298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.593309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.593447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.593457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 
00:26:48.201 [2024-11-07 10:55:15.593541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.201 [2024-11-07 10:55:15.593570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.201 qpair failed and we were unable to recover it. 00:26:48.201 [2024-11-07 10:55:15.593638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.593648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.593711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.593722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.593800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.593810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.593964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.593974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.594110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.594123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.594186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.594196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.594271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.594282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.594352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.594362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.594449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.594459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 
00:26:48.202 [2024-11-07 10:55:15.594603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.594614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.594680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.594689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.594769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.594779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.594870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.594880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.595019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.595029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.595162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.595172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.595251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.595261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.595398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.595408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.595475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.595486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.595559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.595569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 
00:26:48.202 [2024-11-07 10:55:15.595642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.595652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.595720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.595730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.595870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.595880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.595953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.595963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.596038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.596048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.596111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.596121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.596297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.596307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.596442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.596452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.596543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.596554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.596636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.596647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 
00:26:48.202 [2024-11-07 10:55:15.596714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.596725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.596885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.596895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.597024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.597034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.202 [2024-11-07 10:55:15.597099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.597109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.202 [2024-11-07 10:55:15.597171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.202 [2024-11-07 10:55:15.597180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.202 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.597247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.597257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.597330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.597341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.597414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:48.203 [2024-11-07 10:55:15.597425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.597575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.597585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 
00:26:48.203 [2024-11-07 10:55:15.597666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.597677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.203 [2024-11-07 10:55:15.597823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.597834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.597893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.597903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.598034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.598044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.598131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.598143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.598205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.598214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.598290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.598299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.598359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.598368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.598442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.598453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 
00:26:48.203 [2024-11-07 10:55:15.598519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.598528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.598597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.598606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.598672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.598683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.598760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.598770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.598842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.598851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.598929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.598939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.599039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.599049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.599121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.599131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.599259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.599270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.599353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.599363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 
00:26:48.203 [2024-11-07 10:55:15.599478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.599489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.599576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.599587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.599658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.599668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.599733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.599743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.599815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.599825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.599894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.599906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.600037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.600047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.600110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.600120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.600192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.600202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.203 qpair failed and we were unable to recover it. 00:26:48.203 [2024-11-07 10:55:15.600265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.203 [2024-11-07 10:55:15.600275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.204 qpair failed and we were unable to recover it. 
00:26:48.204 [2024-11-07 10:55:15.600337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.204 [2024-11-07 10:55:15.600347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.204 qpair failed and we were unable to recover it. 00:26:48.204 [2024-11-07 10:55:15.600410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.204 [2024-11-07 10:55:15.600420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.204 qpair failed and we were unable to recover it. 00:26:48.204 [2024-11-07 10:55:15.600489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.204 [2024-11-07 10:55:15.600500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8970000b90 with addr=10.0.0.2, port=4420 00:26:48.204 qpair failed and we were unable to recover it. 00:26:48.204 [2024-11-07 10:55:15.600865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.204 [2024-11-07 10:55:15.603104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.204 [2024-11-07 10:55:15.603182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.204 [2024-11-07 10:55:15.603200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.204 [2024-11-07 10:55:15.603208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.204 [2024-11-07 10:55:15.603216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.204 [2024-11-07 10:55:15.603238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.204 qpair failed and we were unable to recover it. 
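[editor's note] The repeated `connect() failed, errno = 111` entries above are plain ECONNREFUSED: the host keeps retrying TCP connections to 10.0.0.2:4420 while no listener is up yet. Once the `nvmf_tcp_listen` notice appears, the failures change character to fabric-level CONNECT rejections (sct 1, sc 130, i.e. command-specific status 0x82), consistent with the target-side `Unknown controller ID 0x1` complaint. A minimal bash sketch of the socket-level symptom, assuming nothing is listening on that address/port (illustrative only, not part of the test scripts):

#!/usr/bin/env bash
# Illustrative sketch only: reproduce the ECONNREFUSED (errno 111) symptom
# seen in the log by attempting a bare TCP connect to the target address
# before the NVMe/TCP listener exists. Address and port are taken from the
# log; the script itself is hypothetical.
if bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "connected to 10.0.0.2:4420"
else
    echo "connect() refused by 10.0.0.2:4420 (errno 111, ECONNREFUSED)"
fi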
00:26:48.204 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.204 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:48.204 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.204 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:48.204 [2024-11-07 10:55:15.613007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.204 [2024-11-07 10:55:15.613071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.204 [2024-11-07 10:55:15.613087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.204 [2024-11-07 10:55:15.613094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.204 [2024-11-07 10:55:15.613101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.204 [2024-11-07 10:55:15.613117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.204 qpair failed and we were unable to recover it. 00:26:48.204 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.204 10:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2841892 00:26:48.204 [2024-11-07 10:55:15.622951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.204 [2024-11-07 10:55:15.623006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.204 [2024-11-07 10:55:15.623021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.204 [2024-11-07 10:55:15.623027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.204 [2024-11-07 10:55:15.623034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.204 [2024-11-07 10:55:15.623049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.204 qpair failed and we were unable to recover it. 
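[editor's note] For readability, the RPC calls interleaved with the error output above (issued through the test harness's `rpc_cmd` helper from host/target_disconnect.sh) boil down to the following sequence. The sketch assumes `rpc_cmd` forwards to SPDK's `scripts/rpc.py`; the NQN, transport, address, and port are copied verbatim from the xtrace lines:

# Assumed equivalent of the rpc_cmd invocations visible in the xtrace output:
# attach the Malloc0 namespace, then expose the subsystem and the discovery
# service on the NVMe/TCP listener the host has been retrying against.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420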
00:26:48.204 [2024-11-07 10:55:15.633083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.204 [2024-11-07 10:55:15.633151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.204 [2024-11-07 10:55:15.633166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.204 [2024-11-07 10:55:15.633172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.204 [2024-11-07 10:55:15.633179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.204 [2024-11-07 10:55:15.633195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.204 qpair failed and we were unable to recover it. 00:26:48.204 [2024-11-07 10:55:15.642955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.204 [2024-11-07 10:55:15.643063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.204 [2024-11-07 10:55:15.643078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.204 [2024-11-07 10:55:15.643085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.204 [2024-11-07 10:55:15.643091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.204 [2024-11-07 10:55:15.643106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.204 qpair failed and we were unable to recover it. 00:26:48.204 [2024-11-07 10:55:15.652957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.204 [2024-11-07 10:55:15.653049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.204 [2024-11-07 10:55:15.653064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.204 [2024-11-07 10:55:15.653071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.204 [2024-11-07 10:55:15.653077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.204 [2024-11-07 10:55:15.653092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.204 qpair failed and we were unable to recover it. 
00:26:48.204 [2024-11-07 10:55:15.663053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.204 [2024-11-07 10:55:15.663109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.204 [2024-11-07 10:55:15.663123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.204 [2024-11-07 10:55:15.663130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.204 [2024-11-07 10:55:15.663136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.204 [2024-11-07 10:55:15.663151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.204 qpair failed and we were unable to recover it. 00:26:48.204 [2024-11-07 10:55:15.673013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.204 [2024-11-07 10:55:15.673071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.204 [2024-11-07 10:55:15.673088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.204 [2024-11-07 10:55:15.673095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.204 [2024-11-07 10:55:15.673102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.204 [2024-11-07 10:55:15.673117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.204 qpair failed and we were unable to recover it. 00:26:48.204 [2024-11-07 10:55:15.683079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.204 [2024-11-07 10:55:15.683137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.204 [2024-11-07 10:55:15.683152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.204 [2024-11-07 10:55:15.683159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.204 [2024-11-07 10:55:15.683165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.204 [2024-11-07 10:55:15.683180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.204 qpair failed and we were unable to recover it. 
00:26:48.730 [2024-11-07 10:55:16.324953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.730 [2024-11-07 10:55:16.325010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.730 [2024-11-07 10:55:16.325024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.730 [2024-11-07 10:55:16.325031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.730 [2024-11-07 10:55:16.325037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.730 [2024-11-07 10:55:16.325052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.730 qpair failed and we were unable to recover it. 00:26:48.730 [2024-11-07 10:55:16.334988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.730 [2024-11-07 10:55:16.335041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.730 [2024-11-07 10:55:16.335055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.730 [2024-11-07 10:55:16.335061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.730 [2024-11-07 10:55:16.335068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.730 [2024-11-07 10:55:16.335082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.730 qpair failed and we were unable to recover it. 00:26:48.730 [2024-11-07 10:55:16.345007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.730 [2024-11-07 10:55:16.345063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.730 [2024-11-07 10:55:16.345077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.730 [2024-11-07 10:55:16.345084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.730 [2024-11-07 10:55:16.345090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.730 [2024-11-07 10:55:16.345104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.730 qpair failed and we were unable to recover it. 
00:26:48.730 [2024-11-07 10:55:16.355055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.730 [2024-11-07 10:55:16.355110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.730 [2024-11-07 10:55:16.355124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.730 [2024-11-07 10:55:16.355131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.730 [2024-11-07 10:55:16.355137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.730 [2024-11-07 10:55:16.355152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.730 qpair failed and we were unable to recover it. 00:26:48.730 [2024-11-07 10:55:16.365103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.730 [2024-11-07 10:55:16.365162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.730 [2024-11-07 10:55:16.365179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.730 [2024-11-07 10:55:16.365186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.730 [2024-11-07 10:55:16.365192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.730 [2024-11-07 10:55:16.365208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.730 qpair failed and we were unable to recover it. 00:26:48.730 [2024-11-07 10:55:16.375097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.730 [2024-11-07 10:55:16.375154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.730 [2024-11-07 10:55:16.375168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.730 [2024-11-07 10:55:16.375175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.730 [2024-11-07 10:55:16.375181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.730 [2024-11-07 10:55:16.375197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.730 qpair failed and we were unable to recover it. 
00:26:48.730 [2024-11-07 10:55:16.385171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.730 [2024-11-07 10:55:16.385225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.730 [2024-11-07 10:55:16.385238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.730 [2024-11-07 10:55:16.385245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.730 [2024-11-07 10:55:16.385251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.730 [2024-11-07 10:55:16.385266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.730 qpair failed and we were unable to recover it. 00:26:48.991 [2024-11-07 10:55:16.395160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.991 [2024-11-07 10:55:16.395216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.991 [2024-11-07 10:55:16.395230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.991 [2024-11-07 10:55:16.395236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.991 [2024-11-07 10:55:16.395243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.991 [2024-11-07 10:55:16.395257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.991 qpair failed and we were unable to recover it. 00:26:48.991 [2024-11-07 10:55:16.405276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.991 [2024-11-07 10:55:16.405341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.991 [2024-11-07 10:55:16.405355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.991 [2024-11-07 10:55:16.405362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.991 [2024-11-07 10:55:16.405371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.991 [2024-11-07 10:55:16.405386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.991 qpair failed and we were unable to recover it. 
00:26:48.991 [2024-11-07 10:55:16.415280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.991 [2024-11-07 10:55:16.415338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.991 [2024-11-07 10:55:16.415352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.991 [2024-11-07 10:55:16.415359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.991 [2024-11-07 10:55:16.415365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.991 [2024-11-07 10:55:16.415381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.991 qpair failed and we were unable to recover it. 00:26:48.991 [2024-11-07 10:55:16.425216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.991 [2024-11-07 10:55:16.425272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.991 [2024-11-07 10:55:16.425286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.991 [2024-11-07 10:55:16.425293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.991 [2024-11-07 10:55:16.425299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.991 [2024-11-07 10:55:16.425314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.991 qpair failed and we were unable to recover it. 00:26:48.991 [2024-11-07 10:55:16.435300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.991 [2024-11-07 10:55:16.435356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.991 [2024-11-07 10:55:16.435371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.991 [2024-11-07 10:55:16.435379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.991 [2024-11-07 10:55:16.435385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.991 [2024-11-07 10:55:16.435400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.991 qpair failed and we were unable to recover it. 
00:26:48.991 [2024-11-07 10:55:16.445294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.991 [2024-11-07 10:55:16.445352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.991 [2024-11-07 10:55:16.445366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.991 [2024-11-07 10:55:16.445373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.991 [2024-11-07 10:55:16.445379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.991 [2024-11-07 10:55:16.445394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.991 qpair failed and we were unable to recover it. 00:26:48.991 [2024-11-07 10:55:16.455376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.991 [2024-11-07 10:55:16.455440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.991 [2024-11-07 10:55:16.455455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.991 [2024-11-07 10:55:16.455462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.991 [2024-11-07 10:55:16.455468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.991 [2024-11-07 10:55:16.455483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.991 qpair failed and we were unable to recover it. 00:26:48.991 [2024-11-07 10:55:16.465346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.991 [2024-11-07 10:55:16.465401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.991 [2024-11-07 10:55:16.465414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.991 [2024-11-07 10:55:16.465421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.991 [2024-11-07 10:55:16.465427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.991 [2024-11-07 10:55:16.465445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.991 qpair failed and we were unable to recover it. 
00:26:48.991 [2024-11-07 10:55:16.475371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.991 [2024-11-07 10:55:16.475427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.991 [2024-11-07 10:55:16.475445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.991 [2024-11-07 10:55:16.475452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.991 [2024-11-07 10:55:16.475458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.991 [2024-11-07 10:55:16.475472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.991 qpair failed and we were unable to recover it. 00:26:48.991 [2024-11-07 10:55:16.485399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.991 [2024-11-07 10:55:16.485459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.991 [2024-11-07 10:55:16.485473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.992 [2024-11-07 10:55:16.485479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.992 [2024-11-07 10:55:16.485485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.992 [2024-11-07 10:55:16.485501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.992 qpair failed and we were unable to recover it. 00:26:48.992 [2024-11-07 10:55:16.495428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.992 [2024-11-07 10:55:16.495493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.992 [2024-11-07 10:55:16.495506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.992 [2024-11-07 10:55:16.495513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.992 [2024-11-07 10:55:16.495519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.992 [2024-11-07 10:55:16.495534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.992 qpair failed and we were unable to recover it. 
00:26:48.992 [2024-11-07 10:55:16.505461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.992 [2024-11-07 10:55:16.505536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.992 [2024-11-07 10:55:16.505550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.992 [2024-11-07 10:55:16.505556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.992 [2024-11-07 10:55:16.505563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.992 [2024-11-07 10:55:16.505577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.992 qpair failed and we were unable to recover it. 00:26:48.992 [2024-11-07 10:55:16.515509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.992 [2024-11-07 10:55:16.515569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.992 [2024-11-07 10:55:16.515583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.992 [2024-11-07 10:55:16.515589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.992 [2024-11-07 10:55:16.515595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.992 [2024-11-07 10:55:16.515610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.992 qpair failed and we were unable to recover it. 00:26:48.992 [2024-11-07 10:55:16.525540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.992 [2024-11-07 10:55:16.525598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.992 [2024-11-07 10:55:16.525612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.992 [2024-11-07 10:55:16.525618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.992 [2024-11-07 10:55:16.525625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.992 [2024-11-07 10:55:16.525640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.992 qpair failed and we were unable to recover it. 
00:26:48.992 [2024-11-07 10:55:16.535539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.992 [2024-11-07 10:55:16.535595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.992 [2024-11-07 10:55:16.535609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.992 [2024-11-07 10:55:16.535619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.992 [2024-11-07 10:55:16.535626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.992 [2024-11-07 10:55:16.535641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.992 qpair failed and we were unable to recover it. 00:26:48.992 [2024-11-07 10:55:16.545598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.992 [2024-11-07 10:55:16.545651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.992 [2024-11-07 10:55:16.545665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.992 [2024-11-07 10:55:16.545672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.992 [2024-11-07 10:55:16.545678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.992 [2024-11-07 10:55:16.545692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.992 qpair failed and we were unable to recover it. 00:26:48.992 [2024-11-07 10:55:16.555588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.992 [2024-11-07 10:55:16.555645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.992 [2024-11-07 10:55:16.555659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.992 [2024-11-07 10:55:16.555666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.992 [2024-11-07 10:55:16.555672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.992 [2024-11-07 10:55:16.555687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.992 qpair failed and we were unable to recover it. 
00:26:48.992 [2024-11-07 10:55:16.565651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.992 [2024-11-07 10:55:16.565724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.992 [2024-11-07 10:55:16.565742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.992 [2024-11-07 10:55:16.565749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.992 [2024-11-07 10:55:16.565756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.992 [2024-11-07 10:55:16.565771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.992 qpair failed and we were unable to recover it. 00:26:48.992 [2024-11-07 10:55:16.575672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.992 [2024-11-07 10:55:16.575727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.992 [2024-11-07 10:55:16.575741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.992 [2024-11-07 10:55:16.575748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.992 [2024-11-07 10:55:16.575754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.992 [2024-11-07 10:55:16.575773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.992 qpair failed and we were unable to recover it. 00:26:48.992 [2024-11-07 10:55:16.585620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.992 [2024-11-07 10:55:16.585673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.992 [2024-11-07 10:55:16.585687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.992 [2024-11-07 10:55:16.585694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.992 [2024-11-07 10:55:16.585700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.992 [2024-11-07 10:55:16.585715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.992 qpair failed and we were unable to recover it. 
00:26:48.992 [2024-11-07 10:55:16.595736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.992 [2024-11-07 10:55:16.595794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.992 [2024-11-07 10:55:16.595808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.992 [2024-11-07 10:55:16.595815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.993 [2024-11-07 10:55:16.595821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.993 [2024-11-07 10:55:16.595836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.993 qpair failed and we were unable to recover it. 00:26:48.993 [2024-11-07 10:55:16.605749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.993 [2024-11-07 10:55:16.605802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.993 [2024-11-07 10:55:16.605816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.993 [2024-11-07 10:55:16.605823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.993 [2024-11-07 10:55:16.605829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.993 [2024-11-07 10:55:16.605844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.993 qpair failed and we were unable to recover it. 00:26:48.993 [2024-11-07 10:55:16.615782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.993 [2024-11-07 10:55:16.615848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.993 [2024-11-07 10:55:16.615862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.993 [2024-11-07 10:55:16.615869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.993 [2024-11-07 10:55:16.615875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.993 [2024-11-07 10:55:16.615890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.993 qpair failed and we were unable to recover it. 
00:26:48.993 [2024-11-07 10:55:16.625780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.993 [2024-11-07 10:55:16.625842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.993 [2024-11-07 10:55:16.625855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.993 [2024-11-07 10:55:16.625862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.993 [2024-11-07 10:55:16.625868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.993 [2024-11-07 10:55:16.625883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.993 qpair failed and we were unable to recover it. 00:26:48.993 [2024-11-07 10:55:16.635861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.993 [2024-11-07 10:55:16.635923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.993 [2024-11-07 10:55:16.635936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.993 [2024-11-07 10:55:16.635944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.993 [2024-11-07 10:55:16.635949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.993 [2024-11-07 10:55:16.635964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.993 qpair failed and we were unable to recover it. 00:26:48.993 [2024-11-07 10:55:16.645863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.993 [2024-11-07 10:55:16.645920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.993 [2024-11-07 10:55:16.645934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.993 [2024-11-07 10:55:16.645941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.993 [2024-11-07 10:55:16.645947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.993 [2024-11-07 10:55:16.645962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.993 qpair failed and we were unable to recover it. 
00:26:48.993 [2024-11-07 10:55:16.655895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.993 [2024-11-07 10:55:16.655949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.993 [2024-11-07 10:55:16.655963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.993 [2024-11-07 10:55:16.655970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.993 [2024-11-07 10:55:16.655976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:48.993 [2024-11-07 10:55:16.655991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.993 qpair failed and we were unable to recover it. 00:26:49.253 [2024-11-07 10:55:16.665972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.253 [2024-11-07 10:55:16.666025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.253 [2024-11-07 10:55:16.666039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.253 [2024-11-07 10:55:16.666048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.253 [2024-11-07 10:55:16.666054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.253 [2024-11-07 10:55:16.666069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.253 qpair failed and we were unable to recover it. 00:26:49.253 [2024-11-07 10:55:16.675954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.253 [2024-11-07 10:55:16.676010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.253 [2024-11-07 10:55:16.676024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.253 [2024-11-07 10:55:16.676031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.253 [2024-11-07 10:55:16.676038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.253 [2024-11-07 10:55:16.676053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.253 qpair failed and we were unable to recover it. 
00:26:49.253 [2024-11-07 10:55:16.686013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.253 [2024-11-07 10:55:16.686074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.253 [2024-11-07 10:55:16.686087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.253 [2024-11-07 10:55:16.686094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.253 [2024-11-07 10:55:16.686101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.253 [2024-11-07 10:55:16.686116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.253 qpair failed and we were unable to recover it. 00:26:49.253 [2024-11-07 10:55:16.696050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.253 [2024-11-07 10:55:16.696102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.253 [2024-11-07 10:55:16.696116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.253 [2024-11-07 10:55:16.696122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.253 [2024-11-07 10:55:16.696128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.253 [2024-11-07 10:55:16.696143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.253 qpair failed and we were unable to recover it. 00:26:49.253 [2024-11-07 10:55:16.706031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.253 [2024-11-07 10:55:16.706133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.253 [2024-11-07 10:55:16.706148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.253 [2024-11-07 10:55:16.706155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.253 [2024-11-07 10:55:16.706161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.253 [2024-11-07 10:55:16.706179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.253 qpair failed and we were unable to recover it. 
00:26:49.253 [2024-11-07 10:55:16.716059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.253 [2024-11-07 10:55:16.716132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.253 [2024-11-07 10:55:16.716147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.253 [2024-11-07 10:55:16.716154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.253 [2024-11-07 10:55:16.716160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.253 [2024-11-07 10:55:16.716179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.253 qpair failed and we were unable to recover it. 00:26:49.253 [2024-11-07 10:55:16.726089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.253 [2024-11-07 10:55:16.726170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.253 [2024-11-07 10:55:16.726185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.253 [2024-11-07 10:55:16.726192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.253 [2024-11-07 10:55:16.726199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.253 [2024-11-07 10:55:16.726214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.253 qpair failed and we were unable to recover it. 00:26:49.253 [2024-11-07 10:55:16.736153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.253 [2024-11-07 10:55:16.736215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.253 [2024-11-07 10:55:16.736229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.253 [2024-11-07 10:55:16.736236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.253 [2024-11-07 10:55:16.736242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.253 [2024-11-07 10:55:16.736257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.253 qpair failed and we were unable to recover it. 
00:26:49.253 [2024-11-07 10:55:16.746156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.253 [2024-11-07 10:55:16.746207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.253 [2024-11-07 10:55:16.746222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.253 [2024-11-07 10:55:16.746229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.254 [2024-11-07 10:55:16.746235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.254 [2024-11-07 10:55:16.746250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.254 qpair failed and we were unable to recover it. 00:26:49.254 [2024-11-07 10:55:16.756199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.254 [2024-11-07 10:55:16.756260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.254 [2024-11-07 10:55:16.756274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.254 [2024-11-07 10:55:16.756281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.254 [2024-11-07 10:55:16.756287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.254 [2024-11-07 10:55:16.756303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.254 qpair failed and we were unable to recover it. 00:26:49.254 [2024-11-07 10:55:16.766244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.254 [2024-11-07 10:55:16.766308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.254 [2024-11-07 10:55:16.766322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.254 [2024-11-07 10:55:16.766330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.254 [2024-11-07 10:55:16.766336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.254 [2024-11-07 10:55:16.766352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.254 qpair failed and we were unable to recover it. 
00:26:49.254 [2024-11-07 10:55:16.776235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.254 [2024-11-07 10:55:16.776291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.254 [2024-11-07 10:55:16.776304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.254 [2024-11-07 10:55:16.776311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.254 [2024-11-07 10:55:16.776317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.254 [2024-11-07 10:55:16.776332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.254 qpair failed and we were unable to recover it. 00:26:49.254 [2024-11-07 10:55:16.786264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.254 [2024-11-07 10:55:16.786318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.254 [2024-11-07 10:55:16.786331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.254 [2024-11-07 10:55:16.786338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.254 [2024-11-07 10:55:16.786344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.254 [2024-11-07 10:55:16.786359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.254 qpair failed and we were unable to recover it. 00:26:49.254 [2024-11-07 10:55:16.796297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.254 [2024-11-07 10:55:16.796370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.254 [2024-11-07 10:55:16.796388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.254 [2024-11-07 10:55:16.796395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.254 [2024-11-07 10:55:16.796401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.254 [2024-11-07 10:55:16.796417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.254 qpair failed and we were unable to recover it. 
00:26:49.254 [2024-11-07 10:55:16.806333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.254 [2024-11-07 10:55:16.806384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.254 [2024-11-07 10:55:16.806398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.254 [2024-11-07 10:55:16.806405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.254 [2024-11-07 10:55:16.806410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.254 [2024-11-07 10:55:16.806425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.254 qpair failed and we were unable to recover it. 00:26:49.254 [2024-11-07 10:55:16.816379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.254 [2024-11-07 10:55:16.816484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.254 [2024-11-07 10:55:16.816499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.254 [2024-11-07 10:55:16.816506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.254 [2024-11-07 10:55:16.816512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.254 [2024-11-07 10:55:16.816527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.254 qpair failed and we were unable to recover it. 00:26:49.254 [2024-11-07 10:55:16.826378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.254 [2024-11-07 10:55:16.826471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.254 [2024-11-07 10:55:16.826486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.254 [2024-11-07 10:55:16.826493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.254 [2024-11-07 10:55:16.826499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.254 [2024-11-07 10:55:16.826514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.254 qpair failed and we were unable to recover it. 
00:26:49.254 [2024-11-07 10:55:16.836392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.254 [2024-11-07 10:55:16.836468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.254 [2024-11-07 10:55:16.836483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.254 [2024-11-07 10:55:16.836490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.254 [2024-11-07 10:55:16.836499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.254 [2024-11-07 10:55:16.836518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.254 qpair failed and we were unable to recover it. 00:26:49.254 [2024-11-07 10:55:16.846445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.254 [2024-11-07 10:55:16.846520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.254 [2024-11-07 10:55:16.846533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.254 [2024-11-07 10:55:16.846540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.254 [2024-11-07 10:55:16.846546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.254 [2024-11-07 10:55:16.846565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.254 qpair failed and we were unable to recover it. 00:26:49.254 [2024-11-07 10:55:16.856491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.254 [2024-11-07 10:55:16.856548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.254 [2024-11-07 10:55:16.856562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.254 [2024-11-07 10:55:16.856568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.254 [2024-11-07 10:55:16.856574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.255 [2024-11-07 10:55:16.856589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.255 qpair failed and we were unable to recover it. 
00:26:49.255 [2024-11-07 10:55:16.866496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.255 [2024-11-07 10:55:16.866545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.255 [2024-11-07 10:55:16.866558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.255 [2024-11-07 10:55:16.866565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.255 [2024-11-07 10:55:16.866571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.255 [2024-11-07 10:55:16.866586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.255 qpair failed and we were unable to recover it. 00:26:49.255 [2024-11-07 10:55:16.876539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.255 [2024-11-07 10:55:16.876597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.255 [2024-11-07 10:55:16.876610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.255 [2024-11-07 10:55:16.876617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.255 [2024-11-07 10:55:16.876623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.255 [2024-11-07 10:55:16.876638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.255 qpair failed and we were unable to recover it. 00:26:49.255 [2024-11-07 10:55:16.886563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.255 [2024-11-07 10:55:16.886620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.255 [2024-11-07 10:55:16.886634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.255 [2024-11-07 10:55:16.886641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.255 [2024-11-07 10:55:16.886647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.255 [2024-11-07 10:55:16.886662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.255 qpair failed and we were unable to recover it. 
00:26:49.255 [2024-11-07 10:55:16.896602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.255 [2024-11-07 10:55:16.896659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.255 [2024-11-07 10:55:16.896672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.255 [2024-11-07 10:55:16.896679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.255 [2024-11-07 10:55:16.896684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.255 [2024-11-07 10:55:16.896699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.255 qpair failed and we were unable to recover it. 00:26:49.255 [2024-11-07 10:55:16.906627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.255 [2024-11-07 10:55:16.906686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.255 [2024-11-07 10:55:16.906700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.255 [2024-11-07 10:55:16.906706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.255 [2024-11-07 10:55:16.906712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.255 [2024-11-07 10:55:16.906727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.255 qpair failed and we were unable to recover it. 00:26:49.255 [2024-11-07 10:55:16.916697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.255 [2024-11-07 10:55:16.916755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.255 [2024-11-07 10:55:16.916768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.255 [2024-11-07 10:55:16.916775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.255 [2024-11-07 10:55:16.916781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.255 [2024-11-07 10:55:16.916796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.255 qpair failed and we were unable to recover it. 
00:26:49.516 [2024-11-07 10:55:16.926709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.516 [2024-11-07 10:55:16.926762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.516 [2024-11-07 10:55:16.926780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.516 [2024-11-07 10:55:16.926790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.516 [2024-11-07 10:55:16.926799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.516 [2024-11-07 10:55:16.926815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.516 qpair failed and we were unable to recover it. 00:26:49.516 [2024-11-07 10:55:16.936723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.516 [2024-11-07 10:55:16.936805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.516 [2024-11-07 10:55:16.936819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.516 [2024-11-07 10:55:16.936826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.516 [2024-11-07 10:55:16.936832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.516 [2024-11-07 10:55:16.936847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.516 qpair failed and we were unable to recover it. 00:26:49.516 [2024-11-07 10:55:16.946805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.516 [2024-11-07 10:55:16.946858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.516 [2024-11-07 10:55:16.946872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.517 [2024-11-07 10:55:16.946879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.517 [2024-11-07 10:55:16.946885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.517 [2024-11-07 10:55:16.946901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.517 qpair failed and we were unable to recover it. 
00:26:49.517 [2024-11-07 10:55:16.956851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.517 [2024-11-07 10:55:16.956958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.517 [2024-11-07 10:55:16.956973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.517 [2024-11-07 10:55:16.956980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.517 [2024-11-07 10:55:16.956986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.517 [2024-11-07 10:55:16.957002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.517 qpair failed and we were unable to recover it. 00:26:49.517 [2024-11-07 10:55:16.966834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.517 [2024-11-07 10:55:16.966908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.517 [2024-11-07 10:55:16.966922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.517 [2024-11-07 10:55:16.966929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.517 [2024-11-07 10:55:16.966940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.517 [2024-11-07 10:55:16.966955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.517 qpair failed and we were unable to recover it. 00:26:49.517 [2024-11-07 10:55:16.976860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.517 [2024-11-07 10:55:16.976917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.517 [2024-11-07 10:55:16.976932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.517 [2024-11-07 10:55:16.976939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.517 [2024-11-07 10:55:16.976945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.517 [2024-11-07 10:55:16.976961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.517 qpair failed and we were unable to recover it. 
00:26:49.517 [2024-11-07 10:55:16.986834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.517 [2024-11-07 10:55:16.986892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.517 [2024-11-07 10:55:16.986906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.517 [2024-11-07 10:55:16.986913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.517 [2024-11-07 10:55:16.986919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.517 [2024-11-07 10:55:16.986934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.517 qpair failed and we were unable to recover it. 00:26:49.517 [2024-11-07 10:55:16.996920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.517 [2024-11-07 10:55:16.996976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.517 [2024-11-07 10:55:16.996990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.517 [2024-11-07 10:55:16.996997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.517 [2024-11-07 10:55:16.997003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.517 [2024-11-07 10:55:16.997018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.517 qpair failed and we were unable to recover it. 00:26:49.517 [2024-11-07 10:55:17.006895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.517 [2024-11-07 10:55:17.006954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.517 [2024-11-07 10:55:17.006969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.517 [2024-11-07 10:55:17.006975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.517 [2024-11-07 10:55:17.006982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.517 [2024-11-07 10:55:17.006997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.517 qpair failed and we were unable to recover it. 
00:26:49.517 [2024-11-07 10:55:17.016917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.517 [2024-11-07 10:55:17.016973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.517 [2024-11-07 10:55:17.016987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.517 [2024-11-07 10:55:17.016994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.517 [2024-11-07 10:55:17.017000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.517 [2024-11-07 10:55:17.017015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.517 qpair failed and we were unable to recover it. 00:26:49.517 [2024-11-07 10:55:17.026958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.517 [2024-11-07 10:55:17.027009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.517 [2024-11-07 10:55:17.027022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.517 [2024-11-07 10:55:17.027029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.517 [2024-11-07 10:55:17.027035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.517 [2024-11-07 10:55:17.027050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.517 qpair failed and we were unable to recover it. 00:26:49.517 [2024-11-07 10:55:17.036971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.517 [2024-11-07 10:55:17.037029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.517 [2024-11-07 10:55:17.037042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.517 [2024-11-07 10:55:17.037049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.517 [2024-11-07 10:55:17.037055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.517 [2024-11-07 10:55:17.037070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.517 qpair failed and we were unable to recover it. 
00:26:49.517 [2024-11-07 10:55:17.047112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.517 [2024-11-07 10:55:17.047185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.517 [2024-11-07 10:55:17.047200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.517 [2024-11-07 10:55:17.047207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.517 [2024-11-07 10:55:17.047213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.517 [2024-11-07 10:55:17.047228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.517 qpair failed and we were unable to recover it. 00:26:49.517 [2024-11-07 10:55:17.057172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.517 [2024-11-07 10:55:17.057228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.517 [2024-11-07 10:55:17.057242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.517 [2024-11-07 10:55:17.057249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.517 [2024-11-07 10:55:17.057255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.517 [2024-11-07 10:55:17.057269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.517 qpair failed and we were unable to recover it. 00:26:49.517 [2024-11-07 10:55:17.067202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.517 [2024-11-07 10:55:17.067258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.518 [2024-11-07 10:55:17.067272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.518 [2024-11-07 10:55:17.067279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.518 [2024-11-07 10:55:17.067285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.518 [2024-11-07 10:55:17.067300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.518 qpair failed and we were unable to recover it. 
00:26:49.518 [2024-11-07 10:55:17.077207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.518 [2024-11-07 10:55:17.077274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.518 [2024-11-07 10:55:17.077290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.518 [2024-11-07 10:55:17.077297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.518 [2024-11-07 10:55:17.077304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.518 [2024-11-07 10:55:17.077320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.518 qpair failed and we were unable to recover it. 00:26:49.518 [2024-11-07 10:55:17.087219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.518 [2024-11-07 10:55:17.087273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.518 [2024-11-07 10:55:17.087287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.518 [2024-11-07 10:55:17.087294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.518 [2024-11-07 10:55:17.087301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.518 [2024-11-07 10:55:17.087315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.518 qpair failed and we were unable to recover it. 00:26:49.518 [2024-11-07 10:55:17.097238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.518 [2024-11-07 10:55:17.097295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.518 [2024-11-07 10:55:17.097310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.518 [2024-11-07 10:55:17.097323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.518 [2024-11-07 10:55:17.097330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.518 [2024-11-07 10:55:17.097345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.518 qpair failed and we were unable to recover it. 
00:26:49.518 [2024-11-07 10:55:17.107278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.518 [2024-11-07 10:55:17.107329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.518 [2024-11-07 10:55:17.107344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.518 [2024-11-07 10:55:17.107351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.518 [2024-11-07 10:55:17.107357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.518 [2024-11-07 10:55:17.107373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.518 qpair failed and we were unable to recover it. 00:26:49.518 [2024-11-07 10:55:17.117211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.518 [2024-11-07 10:55:17.117269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.518 [2024-11-07 10:55:17.117283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.518 [2024-11-07 10:55:17.117290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.518 [2024-11-07 10:55:17.117296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.518 [2024-11-07 10:55:17.117311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.518 qpair failed and we were unable to recover it. 00:26:49.518 [2024-11-07 10:55:17.127296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.518 [2024-11-07 10:55:17.127352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.518 [2024-11-07 10:55:17.127366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.518 [2024-11-07 10:55:17.127374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.518 [2024-11-07 10:55:17.127379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.518 [2024-11-07 10:55:17.127395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.518 qpair failed and we were unable to recover it. 
00:26:49.518 [2024-11-07 10:55:17.137322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.518 [2024-11-07 10:55:17.137371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.518 [2024-11-07 10:55:17.137385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.518 [2024-11-07 10:55:17.137391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.518 [2024-11-07 10:55:17.137397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.518 [2024-11-07 10:55:17.137415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.518 qpair failed and we were unable to recover it. 00:26:49.518 [2024-11-07 10:55:17.147349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.518 [2024-11-07 10:55:17.147419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.518 [2024-11-07 10:55:17.147437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.518 [2024-11-07 10:55:17.147444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.518 [2024-11-07 10:55:17.147450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.518 [2024-11-07 10:55:17.147466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.518 qpair failed and we were unable to recover it. 00:26:49.518 [2024-11-07 10:55:17.157407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.518 [2024-11-07 10:55:17.157470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.518 [2024-11-07 10:55:17.157484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.518 [2024-11-07 10:55:17.157491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.518 [2024-11-07 10:55:17.157497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.518 [2024-11-07 10:55:17.157512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.518 qpair failed and we were unable to recover it. 
00:26:49.518 [2024-11-07 10:55:17.167358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.518 [2024-11-07 10:55:17.167416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.518 [2024-11-07 10:55:17.167430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.518 [2024-11-07 10:55:17.167442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.518 [2024-11-07 10:55:17.167448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.518 [2024-11-07 10:55:17.167464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.518 qpair failed and we were unable to recover it. 00:26:49.518 [2024-11-07 10:55:17.177442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.518 [2024-11-07 10:55:17.177498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.518 [2024-11-07 10:55:17.177512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.518 [2024-11-07 10:55:17.177519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.518 [2024-11-07 10:55:17.177525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.518 [2024-11-07 10:55:17.177540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.518 qpair failed and we were unable to recover it. 00:26:49.778 [2024-11-07 10:55:17.187491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.778 [2024-11-07 10:55:17.187554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.778 [2024-11-07 10:55:17.187568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.778 [2024-11-07 10:55:17.187575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.778 [2024-11-07 10:55:17.187581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.778 [2024-11-07 10:55:17.187596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.778 qpair failed and we were unable to recover it. 
00:26:49.778 [2024-11-07 10:55:17.197508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.778 [2024-11-07 10:55:17.197564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.778 [2024-11-07 10:55:17.197577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.778 [2024-11-07 10:55:17.197585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.778 [2024-11-07 10:55:17.197590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.778 [2024-11-07 10:55:17.197605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.778 qpair failed and we were unable to recover it. 00:26:49.778 [2024-11-07 10:55:17.207583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.778 [2024-11-07 10:55:17.207677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.778 [2024-11-07 10:55:17.207692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.778 [2024-11-07 10:55:17.207699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.778 [2024-11-07 10:55:17.207705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.778 [2024-11-07 10:55:17.207721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.778 qpair failed and we were unable to recover it. 00:26:49.778 [2024-11-07 10:55:17.217547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.778 [2024-11-07 10:55:17.217601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.779 [2024-11-07 10:55:17.217614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.779 [2024-11-07 10:55:17.217621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.779 [2024-11-07 10:55:17.217630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.779 [2024-11-07 10:55:17.217646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.779 qpair failed and we were unable to recover it. 
00:26:49.779 [2024-11-07 10:55:17.227571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.779 [2024-11-07 10:55:17.227633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.779 [2024-11-07 10:55:17.227651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.779 [2024-11-07 10:55:17.227658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.779 [2024-11-07 10:55:17.227665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.779 [2024-11-07 10:55:17.227681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.779 qpair failed and we were unable to recover it. 00:26:49.779 [2024-11-07 10:55:17.237565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.779 [2024-11-07 10:55:17.237643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.779 [2024-11-07 10:55:17.237658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.779 [2024-11-07 10:55:17.237664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.779 [2024-11-07 10:55:17.237670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.779 [2024-11-07 10:55:17.237685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.779 qpair failed and we were unable to recover it. 00:26:49.779 [2024-11-07 10:55:17.247642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.779 [2024-11-07 10:55:17.247693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.779 [2024-11-07 10:55:17.247707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.779 [2024-11-07 10:55:17.247714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.779 [2024-11-07 10:55:17.247720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.779 [2024-11-07 10:55:17.247735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.779 qpair failed and we were unable to recover it. 
00:26:49.779 [2024-11-07 10:55:17.257656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.779 [2024-11-07 10:55:17.257717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.779 [2024-11-07 10:55:17.257732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.779 [2024-11-07 10:55:17.257739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.779 [2024-11-07 10:55:17.257746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.779 [2024-11-07 10:55:17.257762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.779 qpair failed and we were unable to recover it. 00:26:49.779 [2024-11-07 10:55:17.268141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.779 [2024-11-07 10:55:17.268195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.779 [2024-11-07 10:55:17.268232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.779 [2024-11-07 10:55:17.268240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.779 [2024-11-07 10:55:17.268246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.779 [2024-11-07 10:55:17.268276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.779 qpair failed and we were unable to recover it. 00:26:49.779 [2024-11-07 10:55:17.277730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.779 [2024-11-07 10:55:17.277790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.779 [2024-11-07 10:55:17.277806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.779 [2024-11-07 10:55:17.277813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.779 [2024-11-07 10:55:17.277819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.779 [2024-11-07 10:55:17.277834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.779 qpair failed and we were unable to recover it. 
00:26:49.779 [2024-11-07 10:55:17.287791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.779 [2024-11-07 10:55:17.287885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.779 [2024-11-07 10:55:17.287901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.779 [2024-11-07 10:55:17.287908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.779 [2024-11-07 10:55:17.287914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.779 [2024-11-07 10:55:17.287931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.779 qpair failed and we were unable to recover it. 00:26:49.779 [2024-11-07 10:55:17.297789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.779 [2024-11-07 10:55:17.297844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.779 [2024-11-07 10:55:17.297858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.779 [2024-11-07 10:55:17.297865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.779 [2024-11-07 10:55:17.297871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.779 [2024-11-07 10:55:17.297886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.779 qpair failed and we were unable to recover it. 00:26:49.779 [2024-11-07 10:55:17.307767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.779 [2024-11-07 10:55:17.307823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.779 [2024-11-07 10:55:17.307837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.779 [2024-11-07 10:55:17.307844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.779 [2024-11-07 10:55:17.307850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.779 [2024-11-07 10:55:17.307865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.779 qpair failed and we were unable to recover it. 
00:26:49.779 [2024-11-07 10:55:17.317850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.779 [2024-11-07 10:55:17.317911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.779 [2024-11-07 10:55:17.317925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.779 [2024-11-07 10:55:17.317931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.779 [2024-11-07 10:55:17.317938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.779 [2024-11-07 10:55:17.317953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.779 qpair failed and we were unable to recover it. 00:26:49.779 [2024-11-07 10:55:17.327809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.780 [2024-11-07 10:55:17.327865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.780 [2024-11-07 10:55:17.327879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.780 [2024-11-07 10:55:17.327886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.780 [2024-11-07 10:55:17.327892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.780 [2024-11-07 10:55:17.327907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.780 qpair failed and we were unable to recover it. 00:26:49.780 [2024-11-07 10:55:17.337824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.780 [2024-11-07 10:55:17.337879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.780 [2024-11-07 10:55:17.337892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.780 [2024-11-07 10:55:17.337898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.780 [2024-11-07 10:55:17.337905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.780 [2024-11-07 10:55:17.337919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.780 qpair failed and we were unable to recover it. 
00:26:49.780 [2024-11-07 10:55:17.347864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.780 [2024-11-07 10:55:17.347921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.780 [2024-11-07 10:55:17.347934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.780 [2024-11-07 10:55:17.347941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.780 [2024-11-07 10:55:17.347947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.780 [2024-11-07 10:55:17.347962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.780 qpair failed and we were unable to recover it. 00:26:49.780 [2024-11-07 10:55:17.357952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.780 [2024-11-07 10:55:17.358030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.780 [2024-11-07 10:55:17.358050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.780 [2024-11-07 10:55:17.358057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.780 [2024-11-07 10:55:17.358063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.780 [2024-11-07 10:55:17.358078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.780 qpair failed and we were unable to recover it. 00:26:49.780 [2024-11-07 10:55:17.368026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.780 [2024-11-07 10:55:17.368077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.780 [2024-11-07 10:55:17.368090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.780 [2024-11-07 10:55:17.368097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.780 [2024-11-07 10:55:17.368102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.780 [2024-11-07 10:55:17.368117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.780 qpair failed and we were unable to recover it. 
00:26:49.780 [2024-11-07 10:55:17.378005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.780 [2024-11-07 10:55:17.378054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.780 [2024-11-07 10:55:17.378068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.780 [2024-11-07 10:55:17.378075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.780 [2024-11-07 10:55:17.378081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.780 [2024-11-07 10:55:17.378096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.780 qpair failed and we were unable to recover it. 00:26:49.780 [2024-11-07 10:55:17.387991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.780 [2024-11-07 10:55:17.388057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.780 [2024-11-07 10:55:17.388071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.780 [2024-11-07 10:55:17.388077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.780 [2024-11-07 10:55:17.388084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.780 [2024-11-07 10:55:17.388099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.780 qpair failed and we were unable to recover it. 00:26:49.780 [2024-11-07 10:55:17.398069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.780 [2024-11-07 10:55:17.398125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.780 [2024-11-07 10:55:17.398139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.780 [2024-11-07 10:55:17.398146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.780 [2024-11-07 10:55:17.398155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.780 [2024-11-07 10:55:17.398171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.780 qpair failed and we were unable to recover it. 
00:26:49.780 [2024-11-07 10:55:17.408058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.780 [2024-11-07 10:55:17.408155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.780 [2024-11-07 10:55:17.408170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.780 [2024-11-07 10:55:17.408177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.780 [2024-11-07 10:55:17.408184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.780 [2024-11-07 10:55:17.408198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.780 qpair failed and we were unable to recover it. 00:26:49.780 [2024-11-07 10:55:17.418149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.780 [2024-11-07 10:55:17.418197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.780 [2024-11-07 10:55:17.418211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.780 [2024-11-07 10:55:17.418218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.780 [2024-11-07 10:55:17.418223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.780 [2024-11-07 10:55:17.418238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.780 qpair failed and we were unable to recover it. 00:26:49.780 [2024-11-07 10:55:17.428143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.780 [2024-11-07 10:55:17.428200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.780 [2024-11-07 10:55:17.428214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.780 [2024-11-07 10:55:17.428221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.780 [2024-11-07 10:55:17.428227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.780 [2024-11-07 10:55:17.428241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.780 qpair failed and we were unable to recover it. 
00:26:49.780 [2024-11-07 10:55:17.438148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.780 [2024-11-07 10:55:17.438206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.780 [2024-11-07 10:55:17.438220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.781 [2024-11-07 10:55:17.438227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.781 [2024-11-07 10:55:17.438233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:49.781 [2024-11-07 10:55:17.438248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.781 qpair failed and we were unable to recover it. 00:26:50.041 [2024-11-07 10:55:17.448255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.041 [2024-11-07 10:55:17.448318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.041 [2024-11-07 10:55:17.448332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.041 [2024-11-07 10:55:17.448339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.041 [2024-11-07 10:55:17.448345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.041 [2024-11-07 10:55:17.448359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.041 qpair failed and we were unable to recover it. 00:26:50.041 [2024-11-07 10:55:17.458292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.041 [2024-11-07 10:55:17.458347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.041 [2024-11-07 10:55:17.458361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.041 [2024-11-07 10:55:17.458368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.041 [2024-11-07 10:55:17.458374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.041 [2024-11-07 10:55:17.458389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.041 qpair failed and we were unable to recover it. 
00:26:50.041 [2024-11-07 10:55:17.468344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.041 [2024-11-07 10:55:17.468426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.041 [2024-11-07 10:55:17.468444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.041 [2024-11-07 10:55:17.468451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.041 [2024-11-07 10:55:17.468457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.041 [2024-11-07 10:55:17.468472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.041 qpair failed and we were unable to recover it. 00:26:50.041 [2024-11-07 10:55:17.478284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.041 [2024-11-07 10:55:17.478356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.041 [2024-11-07 10:55:17.478370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.041 [2024-11-07 10:55:17.478377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.041 [2024-11-07 10:55:17.478383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.041 [2024-11-07 10:55:17.478397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.041 qpair failed and we were unable to recover it. 00:26:50.041 [2024-11-07 10:55:17.488316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.041 [2024-11-07 10:55:17.488376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.041 [2024-11-07 10:55:17.488393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.041 [2024-11-07 10:55:17.488399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.041 [2024-11-07 10:55:17.488406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.041 [2024-11-07 10:55:17.488420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.041 qpair failed and we were unable to recover it. 
00:26:50.041 [2024-11-07 10:55:17.498344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.041 [2024-11-07 10:55:17.498401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.041 [2024-11-07 10:55:17.498414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.041 [2024-11-07 10:55:17.498421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.041 [2024-11-07 10:55:17.498427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.041 [2024-11-07 10:55:17.498447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.041 qpair failed and we were unable to recover it. 00:26:50.041 [2024-11-07 10:55:17.508423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.041 [2024-11-07 10:55:17.508525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.041 [2024-11-07 10:55:17.508540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.041 [2024-11-07 10:55:17.508547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.041 [2024-11-07 10:55:17.508553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.041 [2024-11-07 10:55:17.508568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.041 qpair failed and we were unable to recover it. 00:26:50.041 [2024-11-07 10:55:17.518407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.041 [2024-11-07 10:55:17.518484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.041 [2024-11-07 10:55:17.518500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.041 [2024-11-07 10:55:17.518507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.041 [2024-11-07 10:55:17.518513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.041 [2024-11-07 10:55:17.518528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.041 qpair failed and we were unable to recover it. 
00:26:50.041 [2024-11-07 10:55:17.528436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.041 [2024-11-07 10:55:17.528491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.041 [2024-11-07 10:55:17.528504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.041 [2024-11-07 10:55:17.528515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.041 [2024-11-07 10:55:17.528521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.041 [2024-11-07 10:55:17.528536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.041 qpair failed and we were unable to recover it. 00:26:50.041 [2024-11-07 10:55:17.538501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.041 [2024-11-07 10:55:17.538557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.041 [2024-11-07 10:55:17.538571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.041 [2024-11-07 10:55:17.538578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.041 [2024-11-07 10:55:17.538584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.041 [2024-11-07 10:55:17.538599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.041 qpair failed and we were unable to recover it. 00:26:50.041 [2024-11-07 10:55:17.548492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.041 [2024-11-07 10:55:17.548550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.041 [2024-11-07 10:55:17.548563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.041 [2024-11-07 10:55:17.548570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.041 [2024-11-07 10:55:17.548577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.041 [2024-11-07 10:55:17.548593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.041 qpair failed and we were unable to recover it. 
00:26:50.041 [2024-11-07 10:55:17.558504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.042 [2024-11-07 10:55:17.558561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.042 [2024-11-07 10:55:17.558575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.042 [2024-11-07 10:55:17.558582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.042 [2024-11-07 10:55:17.558588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.042 [2024-11-07 10:55:17.558603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.042 qpair failed and we were unable to recover it. 00:26:50.042 [2024-11-07 10:55:17.568537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.042 [2024-11-07 10:55:17.568620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.042 [2024-11-07 10:55:17.568634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.042 [2024-11-07 10:55:17.568641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.042 [2024-11-07 10:55:17.568647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.042 [2024-11-07 10:55:17.568662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.042 qpair failed and we were unable to recover it. 00:26:50.042 [2024-11-07 10:55:17.578520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.042 [2024-11-07 10:55:17.578607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.042 [2024-11-07 10:55:17.578621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.042 [2024-11-07 10:55:17.578628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.042 [2024-11-07 10:55:17.578634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.042 [2024-11-07 10:55:17.578650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.042 qpair failed and we were unable to recover it. 
00:26:50.042 [2024-11-07 10:55:17.588583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.042 [2024-11-07 10:55:17.588636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.042 [2024-11-07 10:55:17.588650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.042 [2024-11-07 10:55:17.588657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.042 [2024-11-07 10:55:17.588664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.042 [2024-11-07 10:55:17.588678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.042 qpair failed and we were unable to recover it. 00:26:50.042 [2024-11-07 10:55:17.598642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.042 [2024-11-07 10:55:17.598700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.042 [2024-11-07 10:55:17.598713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.042 [2024-11-07 10:55:17.598720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.042 [2024-11-07 10:55:17.598726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.042 [2024-11-07 10:55:17.598740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.042 qpair failed and we were unable to recover it. 00:26:50.042 [2024-11-07 10:55:17.608671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.042 [2024-11-07 10:55:17.608730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.042 [2024-11-07 10:55:17.608744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.042 [2024-11-07 10:55:17.608751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.042 [2024-11-07 10:55:17.608757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.042 [2024-11-07 10:55:17.608773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.042 qpair failed and we were unable to recover it. 
00:26:50.042 [2024-11-07 10:55:17.618699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.042 [2024-11-07 10:55:17.618759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.042 [2024-11-07 10:55:17.618773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.042 [2024-11-07 10:55:17.618780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.042 [2024-11-07 10:55:17.618786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.042 [2024-11-07 10:55:17.618800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.042 qpair failed and we were unable to recover it. 00:26:50.042 [2024-11-07 10:55:17.628735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.042 [2024-11-07 10:55:17.628805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.042 [2024-11-07 10:55:17.628819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.042 [2024-11-07 10:55:17.628826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.042 [2024-11-07 10:55:17.628833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.042 [2024-11-07 10:55:17.628847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.042 qpair failed and we were unable to recover it. 00:26:50.042 [2024-11-07 10:55:17.638787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.042 [2024-11-07 10:55:17.638844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.042 [2024-11-07 10:55:17.638858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.042 [2024-11-07 10:55:17.638865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.042 [2024-11-07 10:55:17.638871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.042 [2024-11-07 10:55:17.638886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.042 qpair failed and we were unable to recover it. 
00:26:50.042 [2024-11-07 10:55:17.648803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.042 [2024-11-07 10:55:17.648861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.042 [2024-11-07 10:55:17.648875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.042 [2024-11-07 10:55:17.648882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.042 [2024-11-07 10:55:17.648888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.042 [2024-11-07 10:55:17.648902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.042 qpair failed and we were unable to recover it. 00:26:50.042 [2024-11-07 10:55:17.658825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.042 [2024-11-07 10:55:17.658915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.042 [2024-11-07 10:55:17.658929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.042 [2024-11-07 10:55:17.658940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.042 [2024-11-07 10:55:17.658946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.042 [2024-11-07 10:55:17.658961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.042 qpair failed and we were unable to recover it. 00:26:50.042 [2024-11-07 10:55:17.668817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.043 [2024-11-07 10:55:17.668910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.043 [2024-11-07 10:55:17.668925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.043 [2024-11-07 10:55:17.668931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.043 [2024-11-07 10:55:17.668937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.043 [2024-11-07 10:55:17.668952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.043 qpair failed and we were unable to recover it. 
00:26:50.043 [2024-11-07 10:55:17.678884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.043 [2024-11-07 10:55:17.678943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.043 [2024-11-07 10:55:17.678957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.043 [2024-11-07 10:55:17.678964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.043 [2024-11-07 10:55:17.678970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.043 [2024-11-07 10:55:17.678986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.043 qpair failed and we were unable to recover it. 00:26:50.043 [2024-11-07 10:55:17.688921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.043 [2024-11-07 10:55:17.688976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.043 [2024-11-07 10:55:17.688990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.043 [2024-11-07 10:55:17.688997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.043 [2024-11-07 10:55:17.689003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.043 [2024-11-07 10:55:17.689018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.043 qpair failed and we were unable to recover it. 00:26:50.043 [2024-11-07 10:55:17.698966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.043 [2024-11-07 10:55:17.699024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.043 [2024-11-07 10:55:17.699038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.043 [2024-11-07 10:55:17.699045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.043 [2024-11-07 10:55:17.699051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.043 [2024-11-07 10:55:17.699069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.043 qpair failed and we were unable to recover it. 
00:26:50.303 [2024-11-07 10:55:17.708975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.303 [2024-11-07 10:55:17.709049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.303 [2024-11-07 10:55:17.709063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.303 [2024-11-07 10:55:17.709070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.303 [2024-11-07 10:55:17.709076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.303 [2024-11-07 10:55:17.709091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.303 qpair failed and we were unable to recover it. 00:26:50.303 [2024-11-07 10:55:17.719044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.303 [2024-11-07 10:55:17.719126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.303 [2024-11-07 10:55:17.719139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.303 [2024-11-07 10:55:17.719146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.303 [2024-11-07 10:55:17.719152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.303 [2024-11-07 10:55:17.719166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.303 qpair failed and we were unable to recover it. 00:26:50.303 [2024-11-07 10:55:17.729027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.303 [2024-11-07 10:55:17.729082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.303 [2024-11-07 10:55:17.729096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.303 [2024-11-07 10:55:17.729102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.303 [2024-11-07 10:55:17.729108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.303 [2024-11-07 10:55:17.729122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.303 qpair failed and we were unable to recover it. 
00:26:50.303 [2024-11-07 10:55:17.739047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.303 [2024-11-07 10:55:17.739130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.303 [2024-11-07 10:55:17.739144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.303 [2024-11-07 10:55:17.739151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.303 [2024-11-07 10:55:17.739158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.303 [2024-11-07 10:55:17.739173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.303 qpair failed and we were unable to recover it. 00:26:50.303 [2024-11-07 10:55:17.749090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.303 [2024-11-07 10:55:17.749147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.303 [2024-11-07 10:55:17.749160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.303 [2024-11-07 10:55:17.749166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.303 [2024-11-07 10:55:17.749172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.303 [2024-11-07 10:55:17.749187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.303 qpair failed and we were unable to recover it. 00:26:50.303 [2024-11-07 10:55:17.759112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.303 [2024-11-07 10:55:17.759171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.303 [2024-11-07 10:55:17.759184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.303 [2024-11-07 10:55:17.759190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.303 [2024-11-07 10:55:17.759197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.303 [2024-11-07 10:55:17.759211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.303 qpair failed and we were unable to recover it. 
00:26:50.303 [2024-11-07 10:55:17.769141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.303 [2024-11-07 10:55:17.769195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.303 [2024-11-07 10:55:17.769208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.303 [2024-11-07 10:55:17.769215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.303 [2024-11-07 10:55:17.769221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.303 [2024-11-07 10:55:17.769235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.303 qpair failed and we were unable to recover it. 00:26:50.303 [2024-11-07 10:55:17.779170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.303 [2024-11-07 10:55:17.779226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.303 [2024-11-07 10:55:17.779240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.303 [2024-11-07 10:55:17.779246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.303 [2024-11-07 10:55:17.779253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.303 [2024-11-07 10:55:17.779267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.303 qpair failed and we were unable to recover it. 00:26:50.303 [2024-11-07 10:55:17.789209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.304 [2024-11-07 10:55:17.789260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.304 [2024-11-07 10:55:17.789277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.304 [2024-11-07 10:55:17.789283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.304 [2024-11-07 10:55:17.789289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.304 [2024-11-07 10:55:17.789304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.304 qpair failed and we were unable to recover it. 
00:26:50.304 [2024-11-07 10:55:17.799231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.304 [2024-11-07 10:55:17.799288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.304 [2024-11-07 10:55:17.799302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.304 [2024-11-07 10:55:17.799309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.304 [2024-11-07 10:55:17.799314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.304 [2024-11-07 10:55:17.799329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.304 qpair failed and we were unable to recover it. 00:26:50.304 [2024-11-07 10:55:17.809264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.304 [2024-11-07 10:55:17.809343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.304 [2024-11-07 10:55:17.809358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.304 [2024-11-07 10:55:17.809365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.304 [2024-11-07 10:55:17.809371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.304 [2024-11-07 10:55:17.809385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.304 qpair failed and we were unable to recover it. 00:26:50.304 [2024-11-07 10:55:17.819289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.304 [2024-11-07 10:55:17.819344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.304 [2024-11-07 10:55:17.819357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.304 [2024-11-07 10:55:17.819364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.304 [2024-11-07 10:55:17.819370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.304 [2024-11-07 10:55:17.819385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.304 qpair failed and we were unable to recover it. 
00:26:50.304 [2024-11-07 10:55:17.829309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.304 [2024-11-07 10:55:17.829361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.304 [2024-11-07 10:55:17.829375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.304 [2024-11-07 10:55:17.829381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.304 [2024-11-07 10:55:17.829387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.304 [2024-11-07 10:55:17.829404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.304 qpair failed and we were unable to recover it. 00:26:50.304 [2024-11-07 10:55:17.839333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.304 [2024-11-07 10:55:17.839391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.304 [2024-11-07 10:55:17.839405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.304 [2024-11-07 10:55:17.839412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.304 [2024-11-07 10:55:17.839418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.304 [2024-11-07 10:55:17.839436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.304 qpair failed and we were unable to recover it. 00:26:50.304 [2024-11-07 10:55:17.849371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.304 [2024-11-07 10:55:17.849424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.304 [2024-11-07 10:55:17.849441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.304 [2024-11-07 10:55:17.849448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.304 [2024-11-07 10:55:17.849454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.304 [2024-11-07 10:55:17.849470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.304 qpair failed and we were unable to recover it. 
00:26:50.304 [2024-11-07 10:55:17.859445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.304 [2024-11-07 10:55:17.859505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.304 [2024-11-07 10:55:17.859519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.304 [2024-11-07 10:55:17.859526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.304 [2024-11-07 10:55:17.859532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.304 [2024-11-07 10:55:17.859548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.304 qpair failed and we were unable to recover it. 00:26:50.304 [2024-11-07 10:55:17.869454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.304 [2024-11-07 10:55:17.869517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.304 [2024-11-07 10:55:17.869531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.304 [2024-11-07 10:55:17.869538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.304 [2024-11-07 10:55:17.869544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.304 [2024-11-07 10:55:17.869559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.304 qpair failed and we were unable to recover it. 00:26:50.304 [2024-11-07 10:55:17.879465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.304 [2024-11-07 10:55:17.879536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.304 [2024-11-07 10:55:17.879549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.304 [2024-11-07 10:55:17.879556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.304 [2024-11-07 10:55:17.879563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.304 [2024-11-07 10:55:17.879578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.304 qpair failed and we were unable to recover it. 
00:26:50.304 [2024-11-07 10:55:17.889493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.304 [2024-11-07 10:55:17.889545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.304 [2024-11-07 10:55:17.889559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.304 [2024-11-07 10:55:17.889566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.304 [2024-11-07 10:55:17.889572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.304 [2024-11-07 10:55:17.889587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.304 qpair failed and we were unable to recover it. 00:26:50.304 [2024-11-07 10:55:17.899535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.304 [2024-11-07 10:55:17.899592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.304 [2024-11-07 10:55:17.899605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.305 [2024-11-07 10:55:17.899612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.305 [2024-11-07 10:55:17.899618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.305 [2024-11-07 10:55:17.899633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.305 qpair failed and we were unable to recover it. 00:26:50.305 [2024-11-07 10:55:17.909546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.305 [2024-11-07 10:55:17.909602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.305 [2024-11-07 10:55:17.909616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.305 [2024-11-07 10:55:17.909622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.305 [2024-11-07 10:55:17.909628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.305 [2024-11-07 10:55:17.909643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.305 qpair failed and we were unable to recover it. 
00:26:50.305 [2024-11-07 10:55:17.919570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.305 [2024-11-07 10:55:17.919625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.305 [2024-11-07 10:55:17.919642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.305 [2024-11-07 10:55:17.919649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.305 [2024-11-07 10:55:17.919655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.305 [2024-11-07 10:55:17.919669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.305 qpair failed and we were unable to recover it. 00:26:50.305 [2024-11-07 10:55:17.929599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.305 [2024-11-07 10:55:17.929660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.305 [2024-11-07 10:55:17.929674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.305 [2024-11-07 10:55:17.929681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.305 [2024-11-07 10:55:17.929687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.305 [2024-11-07 10:55:17.929702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.305 qpair failed and we were unable to recover it. 00:26:50.305 [2024-11-07 10:55:17.939652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.305 [2024-11-07 10:55:17.939713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.305 [2024-11-07 10:55:17.939727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.305 [2024-11-07 10:55:17.939733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.305 [2024-11-07 10:55:17.939739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.305 [2024-11-07 10:55:17.939755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.305 qpair failed and we were unable to recover it. 
00:26:50.305 [2024-11-07 10:55:17.949654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.305 [2024-11-07 10:55:17.949709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.305 [2024-11-07 10:55:17.949723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.305 [2024-11-07 10:55:17.949730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.305 [2024-11-07 10:55:17.949735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.305 [2024-11-07 10:55:17.949750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.305 qpair failed and we were unable to recover it. 00:26:50.305 [2024-11-07 10:55:17.959612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.305 [2024-11-07 10:55:17.959668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.305 [2024-11-07 10:55:17.959682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.305 [2024-11-07 10:55:17.959688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.305 [2024-11-07 10:55:17.959698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.305 [2024-11-07 10:55:17.959713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.305 qpair failed and we were unable to recover it. 00:26:50.565 [2024-11-07 10:55:17.969675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.565 [2024-11-07 10:55:17.969772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.565 [2024-11-07 10:55:17.969787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.565 [2024-11-07 10:55:17.969794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.565 [2024-11-07 10:55:17.969800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.565 [2024-11-07 10:55:17.969815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.565 qpair failed and we were unable to recover it. 
00:26:50.565 [2024-11-07 10:55:17.979758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.565 [2024-11-07 10:55:17.979814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.565 [2024-11-07 10:55:17.979828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.565 [2024-11-07 10:55:17.979835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.565 [2024-11-07 10:55:17.979841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.565 [2024-11-07 10:55:17.979856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.565 qpair failed and we were unable to recover it. 00:26:50.565 [2024-11-07 10:55:17.989797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.565 [2024-11-07 10:55:17.989851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.565 [2024-11-07 10:55:17.989864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.565 [2024-11-07 10:55:17.989871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.565 [2024-11-07 10:55:17.989877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.565 [2024-11-07 10:55:17.989892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.565 qpair failed and we were unable to recover it. 00:26:50.565 [2024-11-07 10:55:17.999808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.565 [2024-11-07 10:55:17.999866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.565 [2024-11-07 10:55:17.999880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.565 [2024-11-07 10:55:17.999887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.565 [2024-11-07 10:55:17.999893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.565 [2024-11-07 10:55:17.999908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.565 qpair failed and we were unable to recover it. 
00:26:50.565 [2024-11-07 10:55:18.009821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.565 [2024-11-07 10:55:18.009875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.565 [2024-11-07 10:55:18.009889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.565 [2024-11-07 10:55:18.009895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.565 [2024-11-07 10:55:18.009901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.565 [2024-11-07 10:55:18.009916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.565 qpair failed and we were unable to recover it. 00:26:50.565 [2024-11-07 10:55:18.019872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.565 [2024-11-07 10:55:18.019922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.565 [2024-11-07 10:55:18.019936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.565 [2024-11-07 10:55:18.019942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.565 [2024-11-07 10:55:18.019948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.565 [2024-11-07 10:55:18.019963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.565 qpair failed and we were unable to recover it. 00:26:50.565 [2024-11-07 10:55:18.029881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.565 [2024-11-07 10:55:18.029929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.565 [2024-11-07 10:55:18.029943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.565 [2024-11-07 10:55:18.029949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.565 [2024-11-07 10:55:18.029955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.565 [2024-11-07 10:55:18.029970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.566 qpair failed and we were unable to recover it. 
00:26:50.566 [2024-11-07 10:55:18.039847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.566 [2024-11-07 10:55:18.039906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.566 [2024-11-07 10:55:18.039920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.566 [2024-11-07 10:55:18.039927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.566 [2024-11-07 10:55:18.039933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.566 [2024-11-07 10:55:18.039948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.566 qpair failed and we were unable to recover it. 00:26:50.566 [2024-11-07 10:55:18.049957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.566 [2024-11-07 10:55:18.050015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.566 [2024-11-07 10:55:18.050032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.566 [2024-11-07 10:55:18.050039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.566 [2024-11-07 10:55:18.050044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.566 [2024-11-07 10:55:18.050059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.566 qpair failed and we were unable to recover it. 00:26:50.566 [2024-11-07 10:55:18.059958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.566 [2024-11-07 10:55:18.060013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.566 [2024-11-07 10:55:18.060027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.566 [2024-11-07 10:55:18.060033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.566 [2024-11-07 10:55:18.060039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.566 [2024-11-07 10:55:18.060054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.566 qpair failed and we were unable to recover it. 
00:26:50.566 [2024-11-07 10:55:18.070001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.566 [2024-11-07 10:55:18.070063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.566 [2024-11-07 10:55:18.070076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.566 [2024-11-07 10:55:18.070083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.566 [2024-11-07 10:55:18.070089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.566 [2024-11-07 10:55:18.070104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.566 qpair failed and we were unable to recover it. 00:26:50.566 [2024-11-07 10:55:18.080025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.566 [2024-11-07 10:55:18.080089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.566 [2024-11-07 10:55:18.080103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.566 [2024-11-07 10:55:18.080109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.566 [2024-11-07 10:55:18.080116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.566 [2024-11-07 10:55:18.080131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.566 qpair failed and we were unable to recover it. 00:26:50.566 [2024-11-07 10:55:18.090055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.566 [2024-11-07 10:55:18.090108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.566 [2024-11-07 10:55:18.090122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.566 [2024-11-07 10:55:18.090134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.566 [2024-11-07 10:55:18.090140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.566 [2024-11-07 10:55:18.090155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.566 qpair failed and we were unable to recover it. 
00:26:50.566 [2024-11-07 10:55:18.100062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.566 [2024-11-07 10:55:18.100115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.566 [2024-11-07 10:55:18.100128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.566 [2024-11-07 10:55:18.100135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.566 [2024-11-07 10:55:18.100141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.566 [2024-11-07 10:55:18.100156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.566 qpair failed and we were unable to recover it. 00:26:50.566 [2024-11-07 10:55:18.110027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.566 [2024-11-07 10:55:18.110081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.566 [2024-11-07 10:55:18.110095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.566 [2024-11-07 10:55:18.110101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.566 [2024-11-07 10:55:18.110107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.566 [2024-11-07 10:55:18.110122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.566 qpair failed and we were unable to recover it. 00:26:50.566 [2024-11-07 10:55:18.120129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.566 [2024-11-07 10:55:18.120208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.566 [2024-11-07 10:55:18.120223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.566 [2024-11-07 10:55:18.120229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.566 [2024-11-07 10:55:18.120235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.566 [2024-11-07 10:55:18.120250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.566 qpair failed and we were unable to recover it. 
00:26:50.566 [2024-11-07 10:55:18.130121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.566 [2024-11-07 10:55:18.130202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.566 [2024-11-07 10:55:18.130216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.566 [2024-11-07 10:55:18.130223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.566 [2024-11-07 10:55:18.130229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.566 [2024-11-07 10:55:18.130243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.566 qpair failed and we were unable to recover it. 00:26:50.566 [2024-11-07 10:55:18.140197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.566 [2024-11-07 10:55:18.140269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.566 [2024-11-07 10:55:18.140284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.566 [2024-11-07 10:55:18.140291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.566 [2024-11-07 10:55:18.140298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.566 [2024-11-07 10:55:18.140312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.566 qpair failed and we were unable to recover it. 00:26:50.566 [2024-11-07 10:55:18.150211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.567 [2024-11-07 10:55:18.150262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.567 [2024-11-07 10:55:18.150277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.567 [2024-11-07 10:55:18.150284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.567 [2024-11-07 10:55:18.150291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.567 [2024-11-07 10:55:18.150306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.567 qpair failed and we were unable to recover it. 
00:26:50.567 [2024-11-07 10:55:18.160163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.567 [2024-11-07 10:55:18.160236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.567 [2024-11-07 10:55:18.160250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.567 [2024-11-07 10:55:18.160257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.567 [2024-11-07 10:55:18.160268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.567 [2024-11-07 10:55:18.160283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.567 qpair failed and we were unable to recover it. 00:26:50.567 [2024-11-07 10:55:18.170269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.567 [2024-11-07 10:55:18.170326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.567 [2024-11-07 10:55:18.170342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.567 [2024-11-07 10:55:18.170349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.567 [2024-11-07 10:55:18.170356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.567 [2024-11-07 10:55:18.170371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.567 qpair failed and we were unable to recover it. 00:26:50.567 [2024-11-07 10:55:18.180298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.567 [2024-11-07 10:55:18.180354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.567 [2024-11-07 10:55:18.180369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.567 [2024-11-07 10:55:18.180376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.567 [2024-11-07 10:55:18.180382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.567 [2024-11-07 10:55:18.180397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.567 qpair failed and we were unable to recover it. 
00:26:50.567 [2024-11-07 10:55:18.190326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.567 [2024-11-07 10:55:18.190384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.567 [2024-11-07 10:55:18.190398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.567 [2024-11-07 10:55:18.190405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.567 [2024-11-07 10:55:18.190411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.567 [2024-11-07 10:55:18.190426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.567 qpair failed and we were unable to recover it. 00:26:50.567 [2024-11-07 10:55:18.200384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.567 [2024-11-07 10:55:18.200446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.567 [2024-11-07 10:55:18.200460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.567 [2024-11-07 10:55:18.200467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.567 [2024-11-07 10:55:18.200473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.567 [2024-11-07 10:55:18.200488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.567 qpair failed and we were unable to recover it. 00:26:50.567 [2024-11-07 10:55:18.210395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.567 [2024-11-07 10:55:18.210468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.567 [2024-11-07 10:55:18.210484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.567 [2024-11-07 10:55:18.210490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.567 [2024-11-07 10:55:18.210496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.567 [2024-11-07 10:55:18.210511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.567 qpair failed and we were unable to recover it. 
00:26:50.567 [2024-11-07 10:55:18.220411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.567 [2024-11-07 10:55:18.220466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.567 [2024-11-07 10:55:18.220480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.567 [2024-11-07 10:55:18.220491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.567 [2024-11-07 10:55:18.220497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.567 [2024-11-07 10:55:18.220512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.567 qpair failed and we were unable to recover it. 00:26:50.567 [2024-11-07 10:55:18.230455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.567 [2024-11-07 10:55:18.230512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.567 [2024-11-07 10:55:18.230527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.567 [2024-11-07 10:55:18.230534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.567 [2024-11-07 10:55:18.230541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.567 [2024-11-07 10:55:18.230557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.567 qpair failed and we were unable to recover it. 00:26:50.828 [2024-11-07 10:55:18.240481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.828 [2024-11-07 10:55:18.240550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.828 [2024-11-07 10:55:18.240564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.828 [2024-11-07 10:55:18.240571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.828 [2024-11-07 10:55:18.240577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.828 [2024-11-07 10:55:18.240592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.828 qpair failed and we were unable to recover it. 
00:26:50.828 [2024-11-07 10:55:18.250500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.828 [2024-11-07 10:55:18.250557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.828 [2024-11-07 10:55:18.250570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.828 [2024-11-07 10:55:18.250577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.828 [2024-11-07 10:55:18.250583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.828 [2024-11-07 10:55:18.250598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.828 qpair failed and we were unable to recover it. 00:26:50.828 [2024-11-07 10:55:18.260530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.828 [2024-11-07 10:55:18.260586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.828 [2024-11-07 10:55:18.260599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.828 [2024-11-07 10:55:18.260606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.828 [2024-11-07 10:55:18.260612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.828 [2024-11-07 10:55:18.260630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.828 qpair failed and we were unable to recover it. 00:26:50.828 [2024-11-07 10:55:18.270563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.828 [2024-11-07 10:55:18.270617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.828 [2024-11-07 10:55:18.270631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.828 [2024-11-07 10:55:18.270638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.828 [2024-11-07 10:55:18.270644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.828 [2024-11-07 10:55:18.270658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.828 qpair failed and we were unable to recover it. 
00:26:50.828 [2024-11-07 10:55:18.280589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.828 [2024-11-07 10:55:18.280648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.828 [2024-11-07 10:55:18.280661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.828 [2024-11-07 10:55:18.280668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.828 [2024-11-07 10:55:18.280674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.828 [2024-11-07 10:55:18.280689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.828 qpair failed and we were unable to recover it. 00:26:50.828 [2024-11-07 10:55:18.290641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.828 [2024-11-07 10:55:18.290697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.828 [2024-11-07 10:55:18.290712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.828 [2024-11-07 10:55:18.290719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.828 [2024-11-07 10:55:18.290726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.828 [2024-11-07 10:55:18.290742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.828 qpair failed and we were unable to recover it. 00:26:50.828 [2024-11-07 10:55:18.300646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.828 [2024-11-07 10:55:18.300700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.828 [2024-11-07 10:55:18.300714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.828 [2024-11-07 10:55:18.300721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.828 [2024-11-07 10:55:18.300728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.828 [2024-11-07 10:55:18.300742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.828 qpair failed and we were unable to recover it. 
00:26:50.828 [2024-11-07 10:55:18.310699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.828 [2024-11-07 10:55:18.310757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.828 [2024-11-07 10:55:18.310771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.828 [2024-11-07 10:55:18.310778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.828 [2024-11-07 10:55:18.310784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.828 [2024-11-07 10:55:18.310799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.828 qpair failed and we were unable to recover it. 00:26:50.828 [2024-11-07 10:55:18.320737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.828 [2024-11-07 10:55:18.320841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.829 [2024-11-07 10:55:18.320855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.829 [2024-11-07 10:55:18.320862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.829 [2024-11-07 10:55:18.320868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.829 [2024-11-07 10:55:18.320883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.829 qpair failed and we were unable to recover it. 00:26:50.829 [2024-11-07 10:55:18.330738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.829 [2024-11-07 10:55:18.330796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.829 [2024-11-07 10:55:18.330810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.829 [2024-11-07 10:55:18.330817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.829 [2024-11-07 10:55:18.330823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.829 [2024-11-07 10:55:18.330837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.829 qpair failed and we were unable to recover it. 
00:26:50.829 [2024-11-07 10:55:18.340819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.829 [2024-11-07 10:55:18.340900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.829 [2024-11-07 10:55:18.340914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.829 [2024-11-07 10:55:18.340921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.829 [2024-11-07 10:55:18.340927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.829 [2024-11-07 10:55:18.340942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.829 qpair failed and we were unable to recover it. 00:26:50.829 [2024-11-07 10:55:18.350781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.829 [2024-11-07 10:55:18.350834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.829 [2024-11-07 10:55:18.350850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.829 [2024-11-07 10:55:18.350857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.829 [2024-11-07 10:55:18.350863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.829 [2024-11-07 10:55:18.350877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.829 qpair failed and we were unable to recover it. 00:26:50.829 [2024-11-07 10:55:18.360818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.829 [2024-11-07 10:55:18.360872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.829 [2024-11-07 10:55:18.360886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.829 [2024-11-07 10:55:18.360893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.829 [2024-11-07 10:55:18.360898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.829 [2024-11-07 10:55:18.360912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.829 qpair failed and we were unable to recover it. 
00:26:50.829 [2024-11-07 10:55:18.370779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.829 [2024-11-07 10:55:18.370837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.829 [2024-11-07 10:55:18.370851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.829 [2024-11-07 10:55:18.370857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.829 [2024-11-07 10:55:18.370864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.829 [2024-11-07 10:55:18.370878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.829 qpair failed and we were unable to recover it. 00:26:50.829 [2024-11-07 10:55:18.380845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.829 [2024-11-07 10:55:18.380903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.829 [2024-11-07 10:55:18.380917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.829 [2024-11-07 10:55:18.380924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.829 [2024-11-07 10:55:18.380930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.829 [2024-11-07 10:55:18.380945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.829 qpair failed and we were unable to recover it. 00:26:50.829 [2024-11-07 10:55:18.390861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.829 [2024-11-07 10:55:18.390913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.829 [2024-11-07 10:55:18.390926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.829 [2024-11-07 10:55:18.390933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.829 [2024-11-07 10:55:18.390943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.829 [2024-11-07 10:55:18.390958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.829 qpair failed and we were unable to recover it. 
00:26:50.829 [2024-11-07 10:55:18.400924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.829 [2024-11-07 10:55:18.400981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.829 [2024-11-07 10:55:18.400995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.829 [2024-11-07 10:55:18.401003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.829 [2024-11-07 10:55:18.401010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.829 [2024-11-07 10:55:18.401025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.829 qpair failed and we were unable to recover it. 00:26:50.829 [2024-11-07 10:55:18.410978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.829 [2024-11-07 10:55:18.411044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.829 [2024-11-07 10:55:18.411059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.829 [2024-11-07 10:55:18.411066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.829 [2024-11-07 10:55:18.411072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.829 [2024-11-07 10:55:18.411086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.829 qpair failed and we were unable to recover it. 00:26:50.829 [2024-11-07 10:55:18.421065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.829 [2024-11-07 10:55:18.421121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.829 [2024-11-07 10:55:18.421134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.829 [2024-11-07 10:55:18.421141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.829 [2024-11-07 10:55:18.421147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.829 [2024-11-07 10:55:18.421162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.829 qpair failed and we were unable to recover it. 
00:26:50.829 [2024-11-07 10:55:18.430990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.829 [2024-11-07 10:55:18.431043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.830 [2024-11-07 10:55:18.431057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.830 [2024-11-07 10:55:18.431064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.830 [2024-11-07 10:55:18.431069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.830 [2024-11-07 10:55:18.431084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.830 qpair failed and we were unable to recover it. 00:26:50.830 [2024-11-07 10:55:18.441037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.830 [2024-11-07 10:55:18.441097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.830 [2024-11-07 10:55:18.441111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.830 [2024-11-07 10:55:18.441117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.830 [2024-11-07 10:55:18.441123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.830 [2024-11-07 10:55:18.441138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.830 qpair failed and we were unable to recover it. 00:26:50.830 [2024-11-07 10:55:18.451024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.830 [2024-11-07 10:55:18.451080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.830 [2024-11-07 10:55:18.451095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.830 [2024-11-07 10:55:18.451101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.830 [2024-11-07 10:55:18.451107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.830 [2024-11-07 10:55:18.451122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.830 qpair failed and we were unable to recover it. 
00:26:50.830 [2024-11-07 10:55:18.461083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.830 [2024-11-07 10:55:18.461135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.830 [2024-11-07 10:55:18.461147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.830 [2024-11-07 10:55:18.461154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.830 [2024-11-07 10:55:18.461160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.830 [2024-11-07 10:55:18.461174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.830 qpair failed and we were unable to recover it. 00:26:50.830 [2024-11-07 10:55:18.471135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.830 [2024-11-07 10:55:18.471187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.830 [2024-11-07 10:55:18.471201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.830 [2024-11-07 10:55:18.471208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.830 [2024-11-07 10:55:18.471214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.830 [2024-11-07 10:55:18.471229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.830 qpair failed and we were unable to recover it. 00:26:50.830 [2024-11-07 10:55:18.481229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.830 [2024-11-07 10:55:18.481313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.830 [2024-11-07 10:55:18.481331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.830 [2024-11-07 10:55:18.481338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.830 [2024-11-07 10:55:18.481344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.830 [2024-11-07 10:55:18.481359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.830 qpair failed and we were unable to recover it. 
00:26:50.830 [2024-11-07 10:55:18.491193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.830 [2024-11-07 10:55:18.491251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.830 [2024-11-07 10:55:18.491265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.830 [2024-11-07 10:55:18.491272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.830 [2024-11-07 10:55:18.491278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:50.830 [2024-11-07 10:55:18.491293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.830 qpair failed and we were unable to recover it. 00:26:51.090 [2024-11-07 10:55:18.501235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.090 [2024-11-07 10:55:18.501316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.090 [2024-11-07 10:55:18.501331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.090 [2024-11-07 10:55:18.501338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.090 [2024-11-07 10:55:18.501345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.090 [2024-11-07 10:55:18.501360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.090 qpair failed and we were unable to recover it. 00:26:51.090 [2024-11-07 10:55:18.511260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.090 [2024-11-07 10:55:18.511313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.090 [2024-11-07 10:55:18.511327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.090 [2024-11-07 10:55:18.511334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.090 [2024-11-07 10:55:18.511340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.090 [2024-11-07 10:55:18.511355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.090 qpair failed and we were unable to recover it. 
00:26:51.090 [2024-11-07 10:55:18.521307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.090 [2024-11-07 10:55:18.521384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.090 [2024-11-07 10:55:18.521399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.090 [2024-11-07 10:55:18.521406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.090 [2024-11-07 10:55:18.521415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.090 [2024-11-07 10:55:18.521437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.090 qpair failed and we were unable to recover it. 00:26:51.090 [2024-11-07 10:55:18.531338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.090 [2024-11-07 10:55:18.531394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.090 [2024-11-07 10:55:18.531408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.090 [2024-11-07 10:55:18.531415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.090 [2024-11-07 10:55:18.531421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.090 [2024-11-07 10:55:18.531439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.090 qpair failed and we were unable to recover it. 00:26:51.090 [2024-11-07 10:55:18.541341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.090 [2024-11-07 10:55:18.541395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.090 [2024-11-07 10:55:18.541409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.090 [2024-11-07 10:55:18.541415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.090 [2024-11-07 10:55:18.541422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.090 [2024-11-07 10:55:18.541441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.090 qpair failed and we were unable to recover it. 
00:26:51.090 [2024-11-07 10:55:18.551354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.090 [2024-11-07 10:55:18.551411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.090 [2024-11-07 10:55:18.551425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.091 [2024-11-07 10:55:18.551431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.091 [2024-11-07 10:55:18.551443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.091 [2024-11-07 10:55:18.551457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.091 qpair failed and we were unable to recover it. 00:26:51.091 [2024-11-07 10:55:18.561390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.091 [2024-11-07 10:55:18.561453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.091 [2024-11-07 10:55:18.561466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.091 [2024-11-07 10:55:18.561473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.091 [2024-11-07 10:55:18.561479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.091 [2024-11-07 10:55:18.561494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.091 qpair failed and we were unable to recover it. 00:26:51.091 [2024-11-07 10:55:18.571419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.091 [2024-11-07 10:55:18.571479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.091 [2024-11-07 10:55:18.571493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.091 [2024-11-07 10:55:18.571499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.091 [2024-11-07 10:55:18.571506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.091 [2024-11-07 10:55:18.571520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.091 qpair failed and we were unable to recover it. 
00:26:51.091 [2024-11-07 10:55:18.581435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.091 [2024-11-07 10:55:18.581490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.091 [2024-11-07 10:55:18.581505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.091 [2024-11-07 10:55:18.581512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.091 [2024-11-07 10:55:18.581518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.091 [2024-11-07 10:55:18.581533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.091 qpair failed and we were unable to recover it. 00:26:51.091 [2024-11-07 10:55:18.591389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.091 [2024-11-07 10:55:18.591454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.091 [2024-11-07 10:55:18.591468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.091 [2024-11-07 10:55:18.591475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.091 [2024-11-07 10:55:18.591481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.091 [2024-11-07 10:55:18.591496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.091 qpair failed and we were unable to recover it. 00:26:51.091 [2024-11-07 10:55:18.601491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.091 [2024-11-07 10:55:18.601550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.091 [2024-11-07 10:55:18.601564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.091 [2024-11-07 10:55:18.601571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.091 [2024-11-07 10:55:18.601577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.091 [2024-11-07 10:55:18.601591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.091 qpair failed and we were unable to recover it. 
00:26:51.091 [2024-11-07 10:55:18.611547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.091 [2024-11-07 10:55:18.611602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.091 [2024-11-07 10:55:18.611620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.091 [2024-11-07 10:55:18.611627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.091 [2024-11-07 10:55:18.611633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.091 [2024-11-07 10:55:18.611649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.091 qpair failed and we were unable to recover it. 00:26:51.091 [2024-11-07 10:55:18.621628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.091 [2024-11-07 10:55:18.621686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.091 [2024-11-07 10:55:18.621700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.091 [2024-11-07 10:55:18.621707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.091 [2024-11-07 10:55:18.621713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.091 [2024-11-07 10:55:18.621729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.091 qpair failed and we were unable to recover it. 00:26:51.091 [2024-11-07 10:55:18.631570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.091 [2024-11-07 10:55:18.631623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.091 [2024-11-07 10:55:18.631636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.091 [2024-11-07 10:55:18.631643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.091 [2024-11-07 10:55:18.631649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.091 [2024-11-07 10:55:18.631663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.091 qpair failed and we were unable to recover it. 
00:26:51.091 [2024-11-07 10:55:18.641629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.091 [2024-11-07 10:55:18.641690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.091 [2024-11-07 10:55:18.641704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.091 [2024-11-07 10:55:18.641711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.091 [2024-11-07 10:55:18.641717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.091 [2024-11-07 10:55:18.641732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.091 qpair failed and we were unable to recover it. 00:26:51.091 [2024-11-07 10:55:18.651653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.091 [2024-11-07 10:55:18.651710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.091 [2024-11-07 10:55:18.651723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.091 [2024-11-07 10:55:18.651733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.091 [2024-11-07 10:55:18.651739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.091 [2024-11-07 10:55:18.651755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.091 qpair failed and we were unable to recover it. 00:26:51.091 [2024-11-07 10:55:18.661673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.091 [2024-11-07 10:55:18.661730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.091 [2024-11-07 10:55:18.661743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.091 [2024-11-07 10:55:18.661750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.091 [2024-11-07 10:55:18.661756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.091 [2024-11-07 10:55:18.661771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.091 qpair failed and we were unable to recover it. 
00:26:51.091 [2024-11-07 10:55:18.671631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.092 [2024-11-07 10:55:18.671688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.092 [2024-11-07 10:55:18.671702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.092 [2024-11-07 10:55:18.671708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.092 [2024-11-07 10:55:18.671714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.092 [2024-11-07 10:55:18.671729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.092 qpair failed and we were unable to recover it. 00:26:51.092 [2024-11-07 10:55:18.681742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.092 [2024-11-07 10:55:18.681798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.092 [2024-11-07 10:55:18.681812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.092 [2024-11-07 10:55:18.681819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.092 [2024-11-07 10:55:18.681824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.092 [2024-11-07 10:55:18.681839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.092 qpair failed and we were unable to recover it. 00:26:51.092 [2024-11-07 10:55:18.691767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.092 [2024-11-07 10:55:18.691818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.092 [2024-11-07 10:55:18.691832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.092 [2024-11-07 10:55:18.691839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.092 [2024-11-07 10:55:18.691845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.092 [2024-11-07 10:55:18.691859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.092 qpair failed and we were unable to recover it. 
00:26:51.092 [2024-11-07 10:55:18.701837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.092 [2024-11-07 10:55:18.701900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.092 [2024-11-07 10:55:18.701914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.092 [2024-11-07 10:55:18.701921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.092 [2024-11-07 10:55:18.701928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.092 [2024-11-07 10:55:18.701942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.092 qpair failed and we were unable to recover it. 00:26:51.092 [2024-11-07 10:55:18.711848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.092 [2024-11-07 10:55:18.711903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.092 [2024-11-07 10:55:18.711917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.092 [2024-11-07 10:55:18.711923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.092 [2024-11-07 10:55:18.711929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.092 [2024-11-07 10:55:18.711944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.092 qpair failed and we were unable to recover it. 00:26:51.092 [2024-11-07 10:55:18.721806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.092 [2024-11-07 10:55:18.721865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.092 [2024-11-07 10:55:18.721879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.092 [2024-11-07 10:55:18.721886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.092 [2024-11-07 10:55:18.721892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.092 [2024-11-07 10:55:18.721907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.092 qpair failed and we were unable to recover it. 
00:26:51.092 [2024-11-07 10:55:18.731813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.092 [2024-11-07 10:55:18.731872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.092 [2024-11-07 10:55:18.731886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.092 [2024-11-07 10:55:18.731892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.092 [2024-11-07 10:55:18.731899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.092 [2024-11-07 10:55:18.731914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.092 qpair failed and we were unable to recover it. 00:26:51.092 [2024-11-07 10:55:18.741920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.092 [2024-11-07 10:55:18.741985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.092 [2024-11-07 10:55:18.741998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.092 [2024-11-07 10:55:18.742005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.092 [2024-11-07 10:55:18.742011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.092 [2024-11-07 10:55:18.742026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.092 qpair failed and we were unable to recover it. 00:26:51.092 [2024-11-07 10:55:18.751902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.092 [2024-11-07 10:55:18.751967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.092 [2024-11-07 10:55:18.751981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.092 [2024-11-07 10:55:18.751988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.092 [2024-11-07 10:55:18.751994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.092 [2024-11-07 10:55:18.752009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.092 qpair failed and we were unable to recover it. 
00:26:51.352 [2024-11-07 10:55:18.761907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.352 [2024-11-07 10:55:18.761964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.352 [2024-11-07 10:55:18.761978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.353 [2024-11-07 10:55:18.761984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.353 [2024-11-07 10:55:18.761990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.353 [2024-11-07 10:55:18.762005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.353 qpair failed and we were unable to recover it. 00:26:51.353 [2024-11-07 10:55:18.771921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.353 [2024-11-07 10:55:18.772015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.353 [2024-11-07 10:55:18.772029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.353 [2024-11-07 10:55:18.772036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.353 [2024-11-07 10:55:18.772043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.353 [2024-11-07 10:55:18.772058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.353 qpair failed and we were unable to recover it. 00:26:51.353 [2024-11-07 10:55:18.782028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.353 [2024-11-07 10:55:18.782084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.353 [2024-11-07 10:55:18.782098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.353 [2024-11-07 10:55:18.782107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.353 [2024-11-07 10:55:18.782114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.353 [2024-11-07 10:55:18.782129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.353 qpair failed and we were unable to recover it. 
00:26:51.353 [2024-11-07 10:55:18.792059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.353 [2024-11-07 10:55:18.792113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.353 [2024-11-07 10:55:18.792126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.353 [2024-11-07 10:55:18.792133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.353 [2024-11-07 10:55:18.792138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.353 [2024-11-07 10:55:18.792153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.353 qpair failed and we were unable to recover it. 00:26:51.353 [2024-11-07 10:55:18.802085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.353 [2024-11-07 10:55:18.802144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.353 [2024-11-07 10:55:18.802158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.353 [2024-11-07 10:55:18.802164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.353 [2024-11-07 10:55:18.802170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.353 [2024-11-07 10:55:18.802185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.353 qpair failed and we were unable to recover it. 00:26:51.353 [2024-11-07 10:55:18.812052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.353 [2024-11-07 10:55:18.812157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.353 [2024-11-07 10:55:18.812172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.353 [2024-11-07 10:55:18.812179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.353 [2024-11-07 10:55:18.812186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.353 [2024-11-07 10:55:18.812201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.353 qpair failed and we were unable to recover it. 
00:26:51.353 [2024-11-07 10:55:18.822156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.353 [2024-11-07 10:55:18.822231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.353 [2024-11-07 10:55:18.822245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.353 [2024-11-07 10:55:18.822252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.353 [2024-11-07 10:55:18.822258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.353 [2024-11-07 10:55:18.822279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.353 qpair failed and we were unable to recover it. 00:26:51.353 [2024-11-07 10:55:18.832204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.353 [2024-11-07 10:55:18.832258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.353 [2024-11-07 10:55:18.832272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.353 [2024-11-07 10:55:18.832278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.353 [2024-11-07 10:55:18.832284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.353 [2024-11-07 10:55:18.832299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.353 qpair failed and we were unable to recover it. 00:26:51.353 [2024-11-07 10:55:18.842210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.353 [2024-11-07 10:55:18.842269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.353 [2024-11-07 10:55:18.842282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.353 [2024-11-07 10:55:18.842289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.353 [2024-11-07 10:55:18.842296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.353 [2024-11-07 10:55:18.842310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.353 qpair failed and we were unable to recover it. 
00:26:51.353 [2024-11-07 10:55:18.852233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.353 [2024-11-07 10:55:18.852293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.353 [2024-11-07 10:55:18.852307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.353 [2024-11-07 10:55:18.852313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.353 [2024-11-07 10:55:18.852319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.353 [2024-11-07 10:55:18.852334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.353 qpair failed and we were unable to recover it. 00:26:51.353 [2024-11-07 10:55:18.862251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.353 [2024-11-07 10:55:18.862306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.353 [2024-11-07 10:55:18.862319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.353 [2024-11-07 10:55:18.862326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.353 [2024-11-07 10:55:18.862332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.353 [2024-11-07 10:55:18.862347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.353 qpair failed and we were unable to recover it. 00:26:51.353 [2024-11-07 10:55:18.872301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.353 [2024-11-07 10:55:18.872358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.353 [2024-11-07 10:55:18.872372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.353 [2024-11-07 10:55:18.872379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.353 [2024-11-07 10:55:18.872385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.353 [2024-11-07 10:55:18.872400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.353 qpair failed and we were unable to recover it. 
00:26:51.353 [2024-11-07 10:55:18.882327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.354 [2024-11-07 10:55:18.882440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.354 [2024-11-07 10:55:18.882455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.354 [2024-11-07 10:55:18.882463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.354 [2024-11-07 10:55:18.882469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.354 [2024-11-07 10:55:18.882485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.354 qpair failed and we were unable to recover it. 00:26:51.354 [2024-11-07 10:55:18.892351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.354 [2024-11-07 10:55:18.892409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.354 [2024-11-07 10:55:18.892422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.354 [2024-11-07 10:55:18.892429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.354 [2024-11-07 10:55:18.892439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.354 [2024-11-07 10:55:18.892454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.354 qpair failed and we were unable to recover it. 00:26:51.354 [2024-11-07 10:55:18.902289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.354 [2024-11-07 10:55:18.902345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.354 [2024-11-07 10:55:18.902359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.354 [2024-11-07 10:55:18.902366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.354 [2024-11-07 10:55:18.902372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.354 [2024-11-07 10:55:18.902386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.354 qpair failed and we were unable to recover it. 
00:26:51.354 [2024-11-07 10:55:18.912455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.354 [2024-11-07 10:55:18.912560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.354 [2024-11-07 10:55:18.912578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.354 [2024-11-07 10:55:18.912585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.354 [2024-11-07 10:55:18.912591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.354 [2024-11-07 10:55:18.912607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.354 qpair failed and we were unable to recover it. 00:26:51.354 [2024-11-07 10:55:18.922436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.354 [2024-11-07 10:55:18.922495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.354 [2024-11-07 10:55:18.922509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.354 [2024-11-07 10:55:18.922516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.354 [2024-11-07 10:55:18.922522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.354 [2024-11-07 10:55:18.922537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.354 qpair failed and we were unable to recover it. 00:26:51.354 [2024-11-07 10:55:18.932471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.354 [2024-11-07 10:55:18.932527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.354 [2024-11-07 10:55:18.932541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.354 [2024-11-07 10:55:18.932547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.354 [2024-11-07 10:55:18.932553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.354 [2024-11-07 10:55:18.932569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.354 qpair failed and we were unable to recover it. 
00:26:51.354 [2024-11-07 10:55:18.942408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.354 [2024-11-07 10:55:18.942468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.354 [2024-11-07 10:55:18.942482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.354 [2024-11-07 10:55:18.942489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.354 [2024-11-07 10:55:18.942495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.354 [2024-11-07 10:55:18.942510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.354 qpair failed and we were unable to recover it. 00:26:51.354 [2024-11-07 10:55:18.952506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.354 [2024-11-07 10:55:18.952558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.354 [2024-11-07 10:55:18.952571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.354 [2024-11-07 10:55:18.952578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.354 [2024-11-07 10:55:18.952587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.354 [2024-11-07 10:55:18.952602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.354 qpair failed and we were unable to recover it. 00:26:51.354 [2024-11-07 10:55:18.962537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.354 [2024-11-07 10:55:18.962617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.354 [2024-11-07 10:55:18.962632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.354 [2024-11-07 10:55:18.962639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.354 [2024-11-07 10:55:18.962646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.354 [2024-11-07 10:55:18.962662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.354 qpair failed and we were unable to recover it. 
00:26:51.354 [2024-11-07 10:55:18.972569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.354 [2024-11-07 10:55:18.972626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.354 [2024-11-07 10:55:18.972640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.354 [2024-11-07 10:55:18.972648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.354 [2024-11-07 10:55:18.972653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.354 [2024-11-07 10:55:18.972669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.354 qpair failed and we were unable to recover it. 00:26:51.354 [2024-11-07 10:55:18.982646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.354 [2024-11-07 10:55:18.982704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.354 [2024-11-07 10:55:18.982718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.354 [2024-11-07 10:55:18.982725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.354 [2024-11-07 10:55:18.982731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.354 [2024-11-07 10:55:18.982746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.354 qpair failed and we were unable to recover it. 00:26:51.354 [2024-11-07 10:55:18.992637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.354 [2024-11-07 10:55:18.992690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.354 [2024-11-07 10:55:18.992704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.354 [2024-11-07 10:55:18.992710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.354 [2024-11-07 10:55:18.992716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.354 [2024-11-07 10:55:18.992730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.354 qpair failed and we were unable to recover it. 
00:26:51.354 [2024-11-07 10:55:19.002690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.355 [2024-11-07 10:55:19.002751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.355 [2024-11-07 10:55:19.002765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.355 [2024-11-07 10:55:19.002772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.355 [2024-11-07 10:55:19.002778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.355 [2024-11-07 10:55:19.002793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.355 qpair failed and we were unable to recover it. 00:26:51.355 [2024-11-07 10:55:19.012690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.355 [2024-11-07 10:55:19.012746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.355 [2024-11-07 10:55:19.012759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.355 [2024-11-07 10:55:19.012766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.355 [2024-11-07 10:55:19.012773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.355 [2024-11-07 10:55:19.012788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.355 qpair failed and we were unable to recover it. 00:26:51.614 [2024-11-07 10:55:19.022713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.614 [2024-11-07 10:55:19.022763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.615 [2024-11-07 10:55:19.022777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.615 [2024-11-07 10:55:19.022783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.615 [2024-11-07 10:55:19.022789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.615 [2024-11-07 10:55:19.022804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.615 qpair failed and we were unable to recover it. 
00:26:51.615 [2024-11-07 10:55:19.032770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.615 [2024-11-07 10:55:19.032829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.615 [2024-11-07 10:55:19.032842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.615 [2024-11-07 10:55:19.032849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.615 [2024-11-07 10:55:19.032855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.615 [2024-11-07 10:55:19.032870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-11-07 10:55:19.042779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.615 [2024-11-07 10:55:19.042837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.615 [2024-11-07 10:55:19.042854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.615 [2024-11-07 10:55:19.042861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.615 [2024-11-07 10:55:19.042867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.615 [2024-11-07 10:55:19.042882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-11-07 10:55:19.052809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.615 [2024-11-07 10:55:19.052865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.615 [2024-11-07 10:55:19.052879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.615 [2024-11-07 10:55:19.052885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.615 [2024-11-07 10:55:19.052891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.615 [2024-11-07 10:55:19.052906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.615 qpair failed and we were unable to recover it. 
00:26:51.615 [2024-11-07 10:55:19.062836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.615 [2024-11-07 10:55:19.062890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.615 [2024-11-07 10:55:19.062904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.615 [2024-11-07 10:55:19.062910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.615 [2024-11-07 10:55:19.062916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.615 [2024-11-07 10:55:19.062931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-11-07 10:55:19.072853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.615 [2024-11-07 10:55:19.072909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.615 [2024-11-07 10:55:19.072922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.615 [2024-11-07 10:55:19.072929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.615 [2024-11-07 10:55:19.072935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.615 [2024-11-07 10:55:19.072950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-11-07 10:55:19.082877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.615 [2024-11-07 10:55:19.082933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.615 [2024-11-07 10:55:19.082947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.615 [2024-11-07 10:55:19.082955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.615 [2024-11-07 10:55:19.082964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.615 [2024-11-07 10:55:19.082979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.615 qpair failed and we were unable to recover it. 
00:26:51.615 [2024-11-07 10:55:19.092925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.615 [2024-11-07 10:55:19.092983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.615 [2024-11-07 10:55:19.092997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.615 [2024-11-07 10:55:19.093004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.615 [2024-11-07 10:55:19.093010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.615 [2024-11-07 10:55:19.093025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-11-07 10:55:19.102942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.615 [2024-11-07 10:55:19.102999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.615 [2024-11-07 10:55:19.103012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.615 [2024-11-07 10:55:19.103018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.615 [2024-11-07 10:55:19.103025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.615 [2024-11-07 10:55:19.103039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.615 qpair failed and we were unable to recover it. 00:26:51.615 [2024-11-07 10:55:19.112971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.615 [2024-11-07 10:55:19.113025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.615 [2024-11-07 10:55:19.113039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.615 [2024-11-07 10:55:19.113046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.615 [2024-11-07 10:55:19.113051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.615 [2024-11-07 10:55:19.113066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.615 qpair failed and we were unable to recover it. 
00:26:51.615 [2024-11-07 10:55:19.123006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.615 [2024-11-07 10:55:19.123063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.615 [2024-11-07 10:55:19.123076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.615 [2024-11-07 10:55:19.123083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.615 [2024-11-07 10:55:19.123089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.616 [2024-11-07 10:55:19.123104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-11-07 10:55:19.133075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.616 [2024-11-07 10:55:19.133134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.616 [2024-11-07 10:55:19.133147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.616 [2024-11-07 10:55:19.133154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.616 [2024-11-07 10:55:19.133160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.616 [2024-11-07 10:55:19.133175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-11-07 10:55:19.143058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.616 [2024-11-07 10:55:19.143113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.616 [2024-11-07 10:55:19.143127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.616 [2024-11-07 10:55:19.143133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.616 [2024-11-07 10:55:19.143139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.616 [2024-11-07 10:55:19.143154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.616 qpair failed and we were unable to recover it. 
00:26:51.616 [2024-11-07 10:55:19.153080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.616 [2024-11-07 10:55:19.153157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.616 [2024-11-07 10:55:19.153172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.616 [2024-11-07 10:55:19.153179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.616 [2024-11-07 10:55:19.153185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.616 [2024-11-07 10:55:19.153200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-11-07 10:55:19.163136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.616 [2024-11-07 10:55:19.163196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.616 [2024-11-07 10:55:19.163210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.616 [2024-11-07 10:55:19.163217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.616 [2024-11-07 10:55:19.163223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.616 [2024-11-07 10:55:19.163237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-11-07 10:55:19.173157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.616 [2024-11-07 10:55:19.173211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.616 [2024-11-07 10:55:19.173229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.616 [2024-11-07 10:55:19.173236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.616 [2024-11-07 10:55:19.173242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.616 [2024-11-07 10:55:19.173256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.616 qpair failed and we were unable to recover it. 
00:26:51.616 [2024-11-07 10:55:19.183169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.616 [2024-11-07 10:55:19.183221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.616 [2024-11-07 10:55:19.183235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.616 [2024-11-07 10:55:19.183242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.616 [2024-11-07 10:55:19.183248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.616 [2024-11-07 10:55:19.183263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-11-07 10:55:19.193222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.616 [2024-11-07 10:55:19.193279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.616 [2024-11-07 10:55:19.193293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.616 [2024-11-07 10:55:19.193300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.616 [2024-11-07 10:55:19.193306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.616 [2024-11-07 10:55:19.193321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-11-07 10:55:19.203230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.616 [2024-11-07 10:55:19.203286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.616 [2024-11-07 10:55:19.203300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.616 [2024-11-07 10:55:19.203307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.616 [2024-11-07 10:55:19.203313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.616 [2024-11-07 10:55:19.203327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.616 qpair failed and we were unable to recover it. 
00:26:51.616 [2024-11-07 10:55:19.213286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.616 [2024-11-07 10:55:19.213343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.616 [2024-11-07 10:55:19.213356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.616 [2024-11-07 10:55:19.213367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.616 [2024-11-07 10:55:19.213373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.616 [2024-11-07 10:55:19.213388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-11-07 10:55:19.223290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.616 [2024-11-07 10:55:19.223346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.616 [2024-11-07 10:55:19.223359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.616 [2024-11-07 10:55:19.223367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.616 [2024-11-07 10:55:19.223373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.616 [2024-11-07 10:55:19.223388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.616 qpair failed and we were unable to recover it. 00:26:51.616 [2024-11-07 10:55:19.233251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.616 [2024-11-07 10:55:19.233310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.616 [2024-11-07 10:55:19.233325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.616 [2024-11-07 10:55:19.233333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.616 [2024-11-07 10:55:19.233339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.616 [2024-11-07 10:55:19.233355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.616 qpair failed and we were unable to recover it. 
00:26:51.617 [2024-11-07 10:55:19.243375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.617 [2024-11-07 10:55:19.243447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.617 [2024-11-07 10:55:19.243462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.617 [2024-11-07 10:55:19.243469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.617 [2024-11-07 10:55:19.243476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.617 [2024-11-07 10:55:19.243491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-11-07 10:55:19.253380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.617 [2024-11-07 10:55:19.253441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.617 [2024-11-07 10:55:19.253455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.617 [2024-11-07 10:55:19.253462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.617 [2024-11-07 10:55:19.253468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.617 [2024-11-07 10:55:19.253483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.617 [2024-11-07 10:55:19.263408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.617 [2024-11-07 10:55:19.263469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.617 [2024-11-07 10:55:19.263483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.617 [2024-11-07 10:55:19.263490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.617 [2024-11-07 10:55:19.263496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.617 [2024-11-07 10:55:19.263511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.617 qpair failed and we were unable to recover it. 
00:26:51.617 [2024-11-07 10:55:19.273440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.617 [2024-11-07 10:55:19.273513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.617 [2024-11-07 10:55:19.273527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.617 [2024-11-07 10:55:19.273534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.617 [2024-11-07 10:55:19.273541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.617 [2024-11-07 10:55:19.273556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.617 qpair failed and we were unable to recover it. 00:26:51.877 [2024-11-07 10:55:19.283462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.877 [2024-11-07 10:55:19.283518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.877 [2024-11-07 10:55:19.283532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.877 [2024-11-07 10:55:19.283539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.877 [2024-11-07 10:55:19.283545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.877 [2024-11-07 10:55:19.283559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.877 qpair failed and we were unable to recover it. 00:26:51.877 [2024-11-07 10:55:19.293500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.877 [2024-11-07 10:55:19.293559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.877 [2024-11-07 10:55:19.293573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.877 [2024-11-07 10:55:19.293580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.877 [2024-11-07 10:55:19.293586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.877 [2024-11-07 10:55:19.293603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.877 qpair failed and we were unable to recover it. 
00:26:51.877 [2024-11-07 10:55:19.303543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.877 [2024-11-07 10:55:19.303605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.877 [2024-11-07 10:55:19.303619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.877 [2024-11-07 10:55:19.303626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.877 [2024-11-07 10:55:19.303632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.877 [2024-11-07 10:55:19.303647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.877 qpair failed and we were unable to recover it. 00:26:51.877 [2024-11-07 10:55:19.313577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.877 [2024-11-07 10:55:19.313636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.877 [2024-11-07 10:55:19.313658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.877 [2024-11-07 10:55:19.313665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.877 [2024-11-07 10:55:19.313676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.877 [2024-11-07 10:55:19.313696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.877 qpair failed and we were unable to recover it. 00:26:51.877 [2024-11-07 10:55:19.323592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.877 [2024-11-07 10:55:19.323652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.877 [2024-11-07 10:55:19.323667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.877 [2024-11-07 10:55:19.323673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.877 [2024-11-07 10:55:19.323680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.877 [2024-11-07 10:55:19.323695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.877 qpair failed and we were unable to recover it. 
00:26:51.877 [2024-11-07 10:55:19.333610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.877 [2024-11-07 10:55:19.333668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.877 [2024-11-07 10:55:19.333682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.877 [2024-11-07 10:55:19.333689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.877 [2024-11-07 10:55:19.333695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.877 [2024-11-07 10:55:19.333710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.877 qpair failed and we were unable to recover it. 00:26:51.877 [2024-11-07 10:55:19.343690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.877 [2024-11-07 10:55:19.343752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.877 [2024-11-07 10:55:19.343766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.877 [2024-11-07 10:55:19.343776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.877 [2024-11-07 10:55:19.343783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.877 [2024-11-07 10:55:19.343798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.877 qpair failed and we were unable to recover it. 00:26:51.877 [2024-11-07 10:55:19.353667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.877 [2024-11-07 10:55:19.353724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.877 [2024-11-07 10:55:19.353737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.878 [2024-11-07 10:55:19.353744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.878 [2024-11-07 10:55:19.353750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.878 [2024-11-07 10:55:19.353764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.878 qpair failed and we were unable to recover it. 
00:26:51.878 [2024-11-07 10:55:19.363705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.878 [2024-11-07 10:55:19.363767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.878 [2024-11-07 10:55:19.363781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.878 [2024-11-07 10:55:19.363787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.878 [2024-11-07 10:55:19.363793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.878 [2024-11-07 10:55:19.363808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.878 qpair failed and we were unable to recover it. 00:26:51.878 [2024-11-07 10:55:19.373725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.878 [2024-11-07 10:55:19.373783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.878 [2024-11-07 10:55:19.373797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.878 [2024-11-07 10:55:19.373804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.878 [2024-11-07 10:55:19.373810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.878 [2024-11-07 10:55:19.373825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.878 qpair failed and we were unable to recover it. 00:26:51.878 [2024-11-07 10:55:19.383725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.878 [2024-11-07 10:55:19.383786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.878 [2024-11-07 10:55:19.383800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.878 [2024-11-07 10:55:19.383807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.878 [2024-11-07 10:55:19.383813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.878 [2024-11-07 10:55:19.383832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.878 qpair failed and we were unable to recover it. 
00:26:51.878 [2024-11-07 10:55:19.393774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.878 [2024-11-07 10:55:19.393825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.878 [2024-11-07 10:55:19.393839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.878 [2024-11-07 10:55:19.393846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.878 [2024-11-07 10:55:19.393852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.878 [2024-11-07 10:55:19.393867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.878 qpair failed and we were unable to recover it. 00:26:51.878 [2024-11-07 10:55:19.403838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.878 [2024-11-07 10:55:19.403897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.878 [2024-11-07 10:55:19.403911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.878 [2024-11-07 10:55:19.403917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.878 [2024-11-07 10:55:19.403923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.878 [2024-11-07 10:55:19.403938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.878 qpair failed and we were unable to recover it. 00:26:51.878 [2024-11-07 10:55:19.413781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.878 [2024-11-07 10:55:19.413842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.878 [2024-11-07 10:55:19.413856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.878 [2024-11-07 10:55:19.413863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.878 [2024-11-07 10:55:19.413869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.878 [2024-11-07 10:55:19.413884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.878 qpair failed and we were unable to recover it. 
00:26:51.878 [2024-11-07 10:55:19.423862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.878 [2024-11-07 10:55:19.423921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.878 [2024-11-07 10:55:19.423934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.878 [2024-11-07 10:55:19.423941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.878 [2024-11-07 10:55:19.423947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.878 [2024-11-07 10:55:19.423962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.878 qpair failed and we were unable to recover it. 00:26:51.878 [2024-11-07 10:55:19.433882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.878 [2024-11-07 10:55:19.433937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.878 [2024-11-07 10:55:19.433951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.878 [2024-11-07 10:55:19.433957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.878 [2024-11-07 10:55:19.433963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.878 [2024-11-07 10:55:19.433978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.878 qpair failed and we were unable to recover it. 00:26:51.878 [2024-11-07 10:55:19.443947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.878 [2024-11-07 10:55:19.444031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.878 [2024-11-07 10:55:19.444046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.878 [2024-11-07 10:55:19.444053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.878 [2024-11-07 10:55:19.444059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.878 [2024-11-07 10:55:19.444074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.878 qpair failed and we were unable to recover it. 
00:26:51.878 [2024-11-07 10:55:19.453939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.878 [2024-11-07 10:55:19.453996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.878 [2024-11-07 10:55:19.454009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.878 [2024-11-07 10:55:19.454016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.878 [2024-11-07 10:55:19.454021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.878 [2024-11-07 10:55:19.454036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.878 qpair failed and we were unable to recover it. 00:26:51.878 [2024-11-07 10:55:19.463988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.879 [2024-11-07 10:55:19.464045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.879 [2024-11-07 10:55:19.464058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.879 [2024-11-07 10:55:19.464064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.879 [2024-11-07 10:55:19.464071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.879 [2024-11-07 10:55:19.464085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.879 qpair failed and we were unable to recover it. 00:26:51.879 [2024-11-07 10:55:19.473994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.879 [2024-11-07 10:55:19.474052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.879 [2024-11-07 10:55:19.474069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.879 [2024-11-07 10:55:19.474077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.879 [2024-11-07 10:55:19.474083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.879 [2024-11-07 10:55:19.474098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.879 qpair failed and we were unable to recover it. 
00:26:51.879 [2024-11-07 10:55:19.484028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.879 [2024-11-07 10:55:19.484091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.879 [2024-11-07 10:55:19.484105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.879 [2024-11-07 10:55:19.484112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.879 [2024-11-07 10:55:19.484118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.879 [2024-11-07 10:55:19.484133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.879 qpair failed and we were unable to recover it. 00:26:51.879 [2024-11-07 10:55:19.494060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.879 [2024-11-07 10:55:19.494114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.879 [2024-11-07 10:55:19.494128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.879 [2024-11-07 10:55:19.494135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.879 [2024-11-07 10:55:19.494141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.879 [2024-11-07 10:55:19.494156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.879 qpair failed and we were unable to recover it. 00:26:51.879 [2024-11-07 10:55:19.504079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.879 [2024-11-07 10:55:19.504133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.879 [2024-11-07 10:55:19.504147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.879 [2024-11-07 10:55:19.504154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.879 [2024-11-07 10:55:19.504159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.879 [2024-11-07 10:55:19.504174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.879 qpair failed and we were unable to recover it. 
00:26:51.879 [2024-11-07 10:55:19.514103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.879 [2024-11-07 10:55:19.514155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.879 [2024-11-07 10:55:19.514169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.879 [2024-11-07 10:55:19.514176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.879 [2024-11-07 10:55:19.514187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.879 [2024-11-07 10:55:19.514203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.879 qpair failed and we were unable to recover it. 00:26:51.879 [2024-11-07 10:55:19.524139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.879 [2024-11-07 10:55:19.524195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.879 [2024-11-07 10:55:19.524209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.879 [2024-11-07 10:55:19.524215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.879 [2024-11-07 10:55:19.524221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.879 [2024-11-07 10:55:19.524236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.879 qpair failed and we were unable to recover it. 00:26:51.879 [2024-11-07 10:55:19.534174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.879 [2024-11-07 10:55:19.534228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.879 [2024-11-07 10:55:19.534242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.879 [2024-11-07 10:55:19.534248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.879 [2024-11-07 10:55:19.534255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:51.879 [2024-11-07 10:55:19.534269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.879 qpair failed and we were unable to recover it. 
00:26:52.140 [2024-11-07 10:55:19.544201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.140 [2024-11-07 10:55:19.544254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.140 [2024-11-07 10:55:19.544268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.140 [2024-11-07 10:55:19.544275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.140 [2024-11-07 10:55:19.544281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.140 [2024-11-07 10:55:19.544296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.140 qpair failed and we were unable to recover it. 00:26:52.140 [2024-11-07 10:55:19.554242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.140 [2024-11-07 10:55:19.554295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.140 [2024-11-07 10:55:19.554309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.140 [2024-11-07 10:55:19.554316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.140 [2024-11-07 10:55:19.554322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.140 [2024-11-07 10:55:19.554336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.140 qpair failed and we were unable to recover it. 00:26:52.140 [2024-11-07 10:55:19.564258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.140 [2024-11-07 10:55:19.564320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.140 [2024-11-07 10:55:19.564334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.140 [2024-11-07 10:55:19.564341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.140 [2024-11-07 10:55:19.564347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.140 [2024-11-07 10:55:19.564362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.140 qpair failed and we were unable to recover it. 
00:26:52.140 [2024-11-07 10:55:19.574286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.140 [2024-11-07 10:55:19.574344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.140 [2024-11-07 10:55:19.574357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.140 [2024-11-07 10:55:19.574364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.140 [2024-11-07 10:55:19.574370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.140 [2024-11-07 10:55:19.574385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.140 qpair failed and we were unable to recover it. 00:26:52.140 [2024-11-07 10:55:19.584312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.140 [2024-11-07 10:55:19.584370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.140 [2024-11-07 10:55:19.584384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.140 [2024-11-07 10:55:19.584390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.584397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.584412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.594353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.594409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.594422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.594429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.594439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.594454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 
00:26:52.141 [2024-11-07 10:55:19.604375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.604438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.604456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.604462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.604468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.604483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.614396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.614455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.614469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.614475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.614482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.614497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.624392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.624452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.624466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.624473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.624480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.624495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 
00:26:52.141 [2024-11-07 10:55:19.634447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.634504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.634518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.634525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.634531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.634546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.644490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.644547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.644561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.644567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.644578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.644593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.654515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.654567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.654581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.654588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.654594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.654608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 
00:26:52.141 [2024-11-07 10:55:19.664539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.664589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.664603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.664610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.664615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.664631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.674561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.674615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.674629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.674635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.674641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.674656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.684600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.684655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.684668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.684675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.684681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.684695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 
00:26:52.141 [2024-11-07 10:55:19.694625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.694681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.694695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.694702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.694708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.694724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.704587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.704636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.704650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.704656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.704662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.704677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.714713] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.714767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.714780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.714787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.714793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.714808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 
00:26:52.141 [2024-11-07 10:55:19.724714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.724771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.724784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.724791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.724797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.724812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.734743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.734801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.734818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.734826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.734832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.734846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.744774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.744855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.744869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.744876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.744883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.744897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 
00:26:52.141 [2024-11-07 10:55:19.754806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.754862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.754875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.754882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.754888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.754903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.764828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.764885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.764898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.764905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.764910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.764925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.774850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.774905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.774918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.774929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.774935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.774950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 
00:26:52.141 [2024-11-07 10:55:19.784915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.141 [2024-11-07 10:55:19.784967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.141 [2024-11-07 10:55:19.784981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.141 [2024-11-07 10:55:19.784988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.141 [2024-11-07 10:55:19.784994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.141 [2024-11-07 10:55:19.785009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.141 qpair failed and we were unable to recover it. 00:26:52.141 [2024-11-07 10:55:19.794939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.142 [2024-11-07 10:55:19.794993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.142 [2024-11-07 10:55:19.795007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.142 [2024-11-07 10:55:19.795014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.142 [2024-11-07 10:55:19.795020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.142 [2024-11-07 10:55:19.795035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.142 qpair failed and we were unable to recover it. 00:26:52.142 [2024-11-07 10:55:19.804949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.142 [2024-11-07 10:55:19.805017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.142 [2024-11-07 10:55:19.805031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.142 [2024-11-07 10:55:19.805039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.142 [2024-11-07 10:55:19.805045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.142 [2024-11-07 10:55:19.805060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.142 qpair failed and we were unable to recover it. 
00:26:52.402 [2024-11-07 10:55:19.814972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.815077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.815091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.815098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.815104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.815123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 00:26:52.402 [2024-11-07 10:55:19.824992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.825045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.825059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.825065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.825071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.825086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 00:26:52.402 [2024-11-07 10:55:19.834957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.835014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.835028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.835035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.835041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.835055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 
00:26:52.402 [2024-11-07 10:55:19.845058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.845134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.845151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.845158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.845164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.845179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 00:26:52.402 [2024-11-07 10:55:19.855099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.855174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.855189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.855196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.855202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.855217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 00:26:52.402 [2024-11-07 10:55:19.865139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.865197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.865210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.865217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.865223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.865238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 
00:26:52.402 [2024-11-07 10:55:19.875068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.875127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.875140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.875147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.875153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.875168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 00:26:52.402 [2024-11-07 10:55:19.885102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.885156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.885169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.885176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.885182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.885197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 00:26:52.402 [2024-11-07 10:55:19.895178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.895277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.895292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.895298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.895304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.895319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 
00:26:52.402 [2024-11-07 10:55:19.905200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.905260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.905273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.905283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.905289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.905304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 00:26:52.402 [2024-11-07 10:55:19.915230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.915283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.915297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.915304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.915309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.915324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 00:26:52.402 [2024-11-07 10:55:19.925282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.925338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.925352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.925359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.925365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.925380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 
00:26:52.402 [2024-11-07 10:55:19.935327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.935386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.935399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.935406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.935412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.935427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 00:26:52.402 [2024-11-07 10:55:19.945262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.945350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.945364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.945372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.945378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.945396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 00:26:52.402 [2024-11-07 10:55:19.955431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.955490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.955504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.955511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.955517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.955532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 
00:26:52.402 [2024-11-07 10:55:19.965400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.965478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.965493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.965499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.965506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.965521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 00:26:52.402 [2024-11-07 10:55:19.975446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.975517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.975531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.975539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.975544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.975564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 00:26:52.402 [2024-11-07 10:55:19.985390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.985450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.985465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.985472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.985478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.985493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 
00:26:52.402 [2024-11-07 10:55:19.995411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:19.995474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.402 [2024-11-07 10:55:19.995488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.402 [2024-11-07 10:55:19.995494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.402 [2024-11-07 10:55:19.995501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.402 [2024-11-07 10:55:19.995516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.402 qpair failed and we were unable to recover it. 00:26:52.402 [2024-11-07 10:55:20.005548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.402 [2024-11-07 10:55:20.005631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.403 [2024-11-07 10:55:20.005650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.403 [2024-11-07 10:55:20.005658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.403 [2024-11-07 10:55:20.005664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.403 [2024-11-07 10:55:20.005681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.403 qpair failed and we were unable to recover it. 00:26:52.403 [2024-11-07 10:55:20.015535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.403 [2024-11-07 10:55:20.015611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.403 [2024-11-07 10:55:20.015630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.403 [2024-11-07 10:55:20.015638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.403 [2024-11-07 10:55:20.015645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.403 [2024-11-07 10:55:20.015662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.403 qpair failed and we were unable to recover it. 
00:26:52.403 [2024-11-07 10:55:20.025594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.403 [2024-11-07 10:55:20.025654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.403 [2024-11-07 10:55:20.025670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.403 [2024-11-07 10:55:20.025678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.403 [2024-11-07 10:55:20.025684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.403 [2024-11-07 10:55:20.025700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.403 qpair failed and we were unable to recover it. 00:26:52.403 [2024-11-07 10:55:20.035546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.403 [2024-11-07 10:55:20.035605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.403 [2024-11-07 10:55:20.035626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.403 [2024-11-07 10:55:20.035633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.403 [2024-11-07 10:55:20.035640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.403 [2024-11-07 10:55:20.035656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.403 qpair failed and we were unable to recover it. 00:26:52.403 [2024-11-07 10:55:20.045595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.403 [2024-11-07 10:55:20.045657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.403 [2024-11-07 10:55:20.045672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.403 [2024-11-07 10:55:20.045684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.403 [2024-11-07 10:55:20.045690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.403 [2024-11-07 10:55:20.045705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.403 qpair failed and we were unable to recover it. 
00:26:52.403 [2024-11-07 10:55:20.055672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.403 [2024-11-07 10:55:20.055727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.403 [2024-11-07 10:55:20.055741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.403 [2024-11-07 10:55:20.055748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.403 [2024-11-07 10:55:20.055754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.403 [2024-11-07 10:55:20.055769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.403 qpair failed and we were unable to recover it. 00:26:52.403 [2024-11-07 10:55:20.065677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.403 [2024-11-07 10:55:20.065748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.403 [2024-11-07 10:55:20.065763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.403 [2024-11-07 10:55:20.065769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.403 [2024-11-07 10:55:20.065776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.403 [2024-11-07 10:55:20.065791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.403 qpair failed and we were unable to recover it. 00:26:52.663 [2024-11-07 10:55:20.075649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.663 [2024-11-07 10:55:20.075714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.663 [2024-11-07 10:55:20.075729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.663 [2024-11-07 10:55:20.075735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.663 [2024-11-07 10:55:20.075746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.663 [2024-11-07 10:55:20.075766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.663 qpair failed and we were unable to recover it. 
00:26:52.663 [2024-11-07 10:55:20.085758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.663 [2024-11-07 10:55:20.085821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.663 [2024-11-07 10:55:20.085837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.663 [2024-11-07 10:55:20.085844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.663 [2024-11-07 10:55:20.085850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.663 [2024-11-07 10:55:20.085867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.663 qpair failed and we were unable to recover it. 00:26:52.664 [2024-11-07 10:55:20.095822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.664 [2024-11-07 10:55:20.095898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.664 [2024-11-07 10:55:20.095913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.664 [2024-11-07 10:55:20.095920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.664 [2024-11-07 10:55:20.095927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.664 [2024-11-07 10:55:20.095942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.664 qpair failed and we were unable to recover it. 00:26:52.664 [2024-11-07 10:55:20.105740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.664 [2024-11-07 10:55:20.105797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.664 [2024-11-07 10:55:20.105812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.664 [2024-11-07 10:55:20.105819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.664 [2024-11-07 10:55:20.105825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.664 [2024-11-07 10:55:20.105840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.664 qpair failed and we were unable to recover it. 
00:26:52.664 [2024-11-07 10:55:20.115750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.664 [2024-11-07 10:55:20.115808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.664 [2024-11-07 10:55:20.115822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.664 [2024-11-07 10:55:20.115828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.664 [2024-11-07 10:55:20.115835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.664 [2024-11-07 10:55:20.115850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.664 qpair failed and we were unable to recover it. 00:26:52.664 [2024-11-07 10:55:20.125796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.664 [2024-11-07 10:55:20.125857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.664 [2024-11-07 10:55:20.125871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.664 [2024-11-07 10:55:20.125877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.664 [2024-11-07 10:55:20.125884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.664 [2024-11-07 10:55:20.125898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.664 qpair failed and we were unable to recover it. 00:26:52.664 [2024-11-07 10:55:20.135883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.664 [2024-11-07 10:55:20.135941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.664 [2024-11-07 10:55:20.135955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.664 [2024-11-07 10:55:20.135962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.664 [2024-11-07 10:55:20.135968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.664 [2024-11-07 10:55:20.135983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.664 qpair failed and we were unable to recover it. 
00:26:52.664 [2024-11-07 10:55:20.145918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.664 [2024-11-07 10:55:20.145974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.664 [2024-11-07 10:55:20.145988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.664 [2024-11-07 10:55:20.145994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.664 [2024-11-07 10:55:20.146000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.664 [2024-11-07 10:55:20.146015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.664 qpair failed and we were unable to recover it. 00:26:52.664 [2024-11-07 10:55:20.155925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.664 [2024-11-07 10:55:20.155984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.664 [2024-11-07 10:55:20.155998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.664 [2024-11-07 10:55:20.156005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.664 [2024-11-07 10:55:20.156011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.664 [2024-11-07 10:55:20.156026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.664 qpair failed and we were unable to recover it. 00:26:52.664 [2024-11-07 10:55:20.165907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.664 [2024-11-07 10:55:20.165964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.664 [2024-11-07 10:55:20.165981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.664 [2024-11-07 10:55:20.165988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.664 [2024-11-07 10:55:20.165994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.664 [2024-11-07 10:55:20.166009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.664 qpair failed and we were unable to recover it. 
00:26:52.664 [2024-11-07 10:55:20.175932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.664 [2024-11-07 10:55:20.175990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.664 [2024-11-07 10:55:20.176004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.664 [2024-11-07 10:55:20.176012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.664 [2024-11-07 10:55:20.176017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.664 [2024-11-07 10:55:20.176033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.664 qpair failed and we were unable to recover it. 00:26:52.664 [2024-11-07 10:55:20.185944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.664 [2024-11-07 10:55:20.185998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.664 [2024-11-07 10:55:20.186012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.664 [2024-11-07 10:55:20.186020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.664 [2024-11-07 10:55:20.186026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.664 [2024-11-07 10:55:20.186041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.664 qpair failed and we were unable to recover it. 00:26:52.664 [2024-11-07 10:55:20.196015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.664 [2024-11-07 10:55:20.196068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.664 [2024-11-07 10:55:20.196082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.664 [2024-11-07 10:55:20.196088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.664 [2024-11-07 10:55:20.196094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.664 [2024-11-07 10:55:20.196109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.664 qpair failed and we were unable to recover it. 
00:26:52.664 [2024-11-07 10:55:20.206076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.664 [2024-11-07 10:55:20.206133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.664 [2024-11-07 10:55:20.206147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.664 [2024-11-07 10:55:20.206154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.664 [2024-11-07 10:55:20.206165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.665 [2024-11-07 10:55:20.206180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.665 qpair failed and we were unable to recover it. 00:26:52.665 [2024-11-07 10:55:20.216130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.665 [2024-11-07 10:55:20.216206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.665 [2024-11-07 10:55:20.216224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.665 [2024-11-07 10:55:20.216232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.665 [2024-11-07 10:55:20.216239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.665 [2024-11-07 10:55:20.216254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.665 qpair failed and we were unable to recover it. 00:26:52.665 [2024-11-07 10:55:20.226138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.665 [2024-11-07 10:55:20.226191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.665 [2024-11-07 10:55:20.226205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.665 [2024-11-07 10:55:20.226212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.665 [2024-11-07 10:55:20.226218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.665 [2024-11-07 10:55:20.226234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.665 qpair failed and we were unable to recover it. 
00:26:52.665 [2024-11-07 10:55:20.236160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.665 [2024-11-07 10:55:20.236215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.665 [2024-11-07 10:55:20.236228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.665 [2024-11-07 10:55:20.236236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.665 [2024-11-07 10:55:20.236242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.665 [2024-11-07 10:55:20.236256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.665 qpair failed and we were unable to recover it. 00:26:52.665 [2024-11-07 10:55:20.246224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.665 [2024-11-07 10:55:20.246282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.665 [2024-11-07 10:55:20.246295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.665 [2024-11-07 10:55:20.246302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.665 [2024-11-07 10:55:20.246308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.665 [2024-11-07 10:55:20.246323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.665 qpair failed and we were unable to recover it. 00:26:52.665 [2024-11-07 10:55:20.256234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.665 [2024-11-07 10:55:20.256289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.665 [2024-11-07 10:55:20.256304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.665 [2024-11-07 10:55:20.256311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.665 [2024-11-07 10:55:20.256317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.665 [2024-11-07 10:55:20.256332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.665 qpair failed and we were unable to recover it. 
00:26:52.665 [2024-11-07 10:55:20.266245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.665 [2024-11-07 10:55:20.266301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.665 [2024-11-07 10:55:20.266315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.665 [2024-11-07 10:55:20.266322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.665 [2024-11-07 10:55:20.266328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.665 [2024-11-07 10:55:20.266343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.665 qpair failed and we were unable to recover it. 00:26:52.665 [2024-11-07 10:55:20.276296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.665 [2024-11-07 10:55:20.276355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.665 [2024-11-07 10:55:20.276369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.665 [2024-11-07 10:55:20.276377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.665 [2024-11-07 10:55:20.276383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.665 [2024-11-07 10:55:20.276398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.665 qpair failed and we were unable to recover it. 00:26:52.665 [2024-11-07 10:55:20.286310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.665 [2024-11-07 10:55:20.286368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.665 [2024-11-07 10:55:20.286382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.665 [2024-11-07 10:55:20.286389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.665 [2024-11-07 10:55:20.286395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.665 [2024-11-07 10:55:20.286410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.665 qpair failed and we were unable to recover it. 
00:26:52.665 [2024-11-07 10:55:20.296308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.665 [2024-11-07 10:55:20.296392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.665 [2024-11-07 10:55:20.296411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.665 [2024-11-07 10:55:20.296418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.665 [2024-11-07 10:55:20.296424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.665 [2024-11-07 10:55:20.296443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.665 qpair failed and we were unable to recover it. 00:26:52.665 [2024-11-07 10:55:20.306368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.665 [2024-11-07 10:55:20.306421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.665 [2024-11-07 10:55:20.306440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.665 [2024-11-07 10:55:20.306448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.665 [2024-11-07 10:55:20.306454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.665 [2024-11-07 10:55:20.306469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.665 qpair failed and we were unable to recover it. 00:26:52.665 [2024-11-07 10:55:20.316424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.665 [2024-11-07 10:55:20.316479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.665 [2024-11-07 10:55:20.316493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.665 [2024-11-07 10:55:20.316500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.665 [2024-11-07 10:55:20.316506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.665 [2024-11-07 10:55:20.316521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.665 qpair failed and we were unable to recover it. 
00:26:52.665 [2024-11-07 10:55:20.326418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.665 [2024-11-07 10:55:20.326479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.665 [2024-11-07 10:55:20.326493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.665 [2024-11-07 10:55:20.326500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.665 [2024-11-07 10:55:20.326507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.666 [2024-11-07 10:55:20.326522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.666 qpair failed and we were unable to recover it. 00:26:52.925 [2024-11-07 10:55:20.336448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.925 [2024-11-07 10:55:20.336507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.925 [2024-11-07 10:55:20.336521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.925 [2024-11-07 10:55:20.336531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.925 [2024-11-07 10:55:20.336538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.925 [2024-11-07 10:55:20.336553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.925 qpair failed and we were unable to recover it. 00:26:52.925 [2024-11-07 10:55:20.346474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.925 [2024-11-07 10:55:20.346530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.925 [2024-11-07 10:55:20.346544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.925 [2024-11-07 10:55:20.346550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.925 [2024-11-07 10:55:20.346556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.926 [2024-11-07 10:55:20.346572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.926 qpair failed and we were unable to recover it. 
00:26:52.926 [2024-11-07 10:55:20.356527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.926 [2024-11-07 10:55:20.356584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.926 [2024-11-07 10:55:20.356597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.926 [2024-11-07 10:55:20.356604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.926 [2024-11-07 10:55:20.356610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.926 [2024-11-07 10:55:20.356625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.926 qpair failed and we were unable to recover it. 00:26:52.926 [2024-11-07 10:55:20.366515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.926 [2024-11-07 10:55:20.366573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.926 [2024-11-07 10:55:20.366587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.926 [2024-11-07 10:55:20.366594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.926 [2024-11-07 10:55:20.366600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.926 [2024-11-07 10:55:20.366614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.926 qpair failed and we were unable to recover it. 00:26:52.926 [2024-11-07 10:55:20.376547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.926 [2024-11-07 10:55:20.376610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.926 [2024-11-07 10:55:20.376625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.926 [2024-11-07 10:55:20.376631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.926 [2024-11-07 10:55:20.376637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.926 [2024-11-07 10:55:20.376656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.926 qpair failed and we were unable to recover it. 
00:26:52.926 [2024-11-07 10:55:20.386577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.926 [2024-11-07 10:55:20.386631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.926 [2024-11-07 10:55:20.386645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.926 [2024-11-07 10:55:20.386651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.926 [2024-11-07 10:55:20.386657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.926 [2024-11-07 10:55:20.386672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.926 qpair failed and we were unable to recover it. 00:26:52.926 [2024-11-07 10:55:20.396529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.926 [2024-11-07 10:55:20.396594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.926 [2024-11-07 10:55:20.396608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.926 [2024-11-07 10:55:20.396614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.926 [2024-11-07 10:55:20.396620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.926 [2024-11-07 10:55:20.396635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.926 qpair failed and we were unable to recover it. 00:26:52.926 [2024-11-07 10:55:20.406646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.926 [2024-11-07 10:55:20.406702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.926 [2024-11-07 10:55:20.406716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.926 [2024-11-07 10:55:20.406723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.926 [2024-11-07 10:55:20.406729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.926 [2024-11-07 10:55:20.406743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.926 qpair failed and we were unable to recover it. 
00:26:52.926 [2024-11-07 10:55:20.416701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.926 [2024-11-07 10:55:20.416764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.926 [2024-11-07 10:55:20.416777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.926 [2024-11-07 10:55:20.416784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.926 [2024-11-07 10:55:20.416790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.926 [2024-11-07 10:55:20.416805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.926 qpair failed and we were unable to recover it. 00:26:52.926 [2024-11-07 10:55:20.426689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.926 [2024-11-07 10:55:20.426749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.926 [2024-11-07 10:55:20.426763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.926 [2024-11-07 10:55:20.426770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.926 [2024-11-07 10:55:20.426777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.926 [2024-11-07 10:55:20.426792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.926 qpair failed and we were unable to recover it. 00:26:52.926 [2024-11-07 10:55:20.436729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.926 [2024-11-07 10:55:20.436785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.926 [2024-11-07 10:55:20.436799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.926 [2024-11-07 10:55:20.436806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.926 [2024-11-07 10:55:20.436812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.926 [2024-11-07 10:55:20.436827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.926 qpair failed and we were unable to recover it. 
00:26:52.926 [2024-11-07 10:55:20.446725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.926 [2024-11-07 10:55:20.446781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.926 [2024-11-07 10:55:20.446794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.926 [2024-11-07 10:55:20.446801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.926 [2024-11-07 10:55:20.446808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.926 [2024-11-07 10:55:20.446823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.927 qpair failed and we were unable to recover it. 00:26:52.927 [2024-11-07 10:55:20.456787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.927 [2024-11-07 10:55:20.456841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.927 [2024-11-07 10:55:20.456855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.927 [2024-11-07 10:55:20.456862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.927 [2024-11-07 10:55:20.456868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.927 [2024-11-07 10:55:20.456883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.927 qpair failed and we were unable to recover it. 00:26:52.927 [2024-11-07 10:55:20.466808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.927 [2024-11-07 10:55:20.466864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.927 [2024-11-07 10:55:20.466879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.927 [2024-11-07 10:55:20.466889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.927 [2024-11-07 10:55:20.466895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.927 [2024-11-07 10:55:20.466910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.927 qpair failed and we were unable to recover it. 
00:26:52.927 [2024-11-07 10:55:20.476883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.927 [2024-11-07 10:55:20.476942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.927 [2024-11-07 10:55:20.476957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.927 [2024-11-07 10:55:20.476964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.927 [2024-11-07 10:55:20.476970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.927 [2024-11-07 10:55:20.476985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.927 qpair failed and we were unable to recover it. 00:26:52.927 [2024-11-07 10:55:20.486897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.927 [2024-11-07 10:55:20.486968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.927 [2024-11-07 10:55:20.486981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.927 [2024-11-07 10:55:20.486988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.927 [2024-11-07 10:55:20.486994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.927 [2024-11-07 10:55:20.487009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.927 qpair failed and we were unable to recover it. 00:26:52.927 [2024-11-07 10:55:20.496895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.927 [2024-11-07 10:55:20.496957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.927 [2024-11-07 10:55:20.496970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.927 [2024-11-07 10:55:20.496977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.927 [2024-11-07 10:55:20.496983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.927 [2024-11-07 10:55:20.496998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.927 qpair failed and we were unable to recover it. 
00:26:52.927 [2024-11-07 10:55:20.506915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.927 [2024-11-07 10:55:20.506973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.927 [2024-11-07 10:55:20.506986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.927 [2024-11-07 10:55:20.506993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.927 [2024-11-07 10:55:20.506999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.927 [2024-11-07 10:55:20.507017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.927 qpair failed and we were unable to recover it. 00:26:52.927 [2024-11-07 10:55:20.516952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.927 [2024-11-07 10:55:20.517025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.927 [2024-11-07 10:55:20.517041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.927 [2024-11-07 10:55:20.517049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.927 [2024-11-07 10:55:20.517055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.927 [2024-11-07 10:55:20.517071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.927 qpair failed and we were unable to recover it. 00:26:52.927 [2024-11-07 10:55:20.527018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.927 [2024-11-07 10:55:20.527078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.927 [2024-11-07 10:55:20.527092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.927 [2024-11-07 10:55:20.527099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.927 [2024-11-07 10:55:20.527105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.927 [2024-11-07 10:55:20.527119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.927 qpair failed and we were unable to recover it. 
00:26:52.927 [2024-11-07 10:55:20.537005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.927 [2024-11-07 10:55:20.537062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.927 [2024-11-07 10:55:20.537076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.927 [2024-11-07 10:55:20.537083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.927 [2024-11-07 10:55:20.537089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.927 [2024-11-07 10:55:20.537103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.927 qpair failed and we were unable to recover it. 00:26:52.927 [2024-11-07 10:55:20.547034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.927 [2024-11-07 10:55:20.547089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.927 [2024-11-07 10:55:20.547103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.927 [2024-11-07 10:55:20.547109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.927 [2024-11-07 10:55:20.547115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.927 [2024-11-07 10:55:20.547130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.927 qpair failed and we were unable to recover it. 00:26:52.927 [2024-11-07 10:55:20.557042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.927 [2024-11-07 10:55:20.557105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.927 [2024-11-07 10:55:20.557119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.927 [2024-11-07 10:55:20.557126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.927 [2024-11-07 10:55:20.557132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.927 [2024-11-07 10:55:20.557147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.927 qpair failed and we were unable to recover it. 
00:26:52.928 [2024-11-07 10:55:20.567046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.928 [2024-11-07 10:55:20.567102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.928 [2024-11-07 10:55:20.567115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.928 [2024-11-07 10:55:20.567122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.928 [2024-11-07 10:55:20.567128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.928 [2024-11-07 10:55:20.567144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.928 qpair failed and we were unable to recover it. 00:26:52.928 [2024-11-07 10:55:20.577131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.928 [2024-11-07 10:55:20.577192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.928 [2024-11-07 10:55:20.577207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.928 [2024-11-07 10:55:20.577214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.928 [2024-11-07 10:55:20.577220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.928 [2024-11-07 10:55:20.577235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.928 qpair failed and we were unable to recover it. 00:26:52.928 [2024-11-07 10:55:20.587096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.928 [2024-11-07 10:55:20.587148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.928 [2024-11-07 10:55:20.587162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.928 [2024-11-07 10:55:20.587168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.928 [2024-11-07 10:55:20.587174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:52.928 [2024-11-07 10:55:20.587189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.928 qpair failed and we were unable to recover it. 
00:26:53.187 [2024-11-07 10:55:20.597199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.187 [2024-11-07 10:55:20.597256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.187 [2024-11-07 10:55:20.597273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.187 [2024-11-07 10:55:20.597280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.187 [2024-11-07 10:55:20.597286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.187 [2024-11-07 10:55:20.597302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.187 qpair failed and we were unable to recover it. 00:26:53.187 [2024-11-07 10:55:20.607223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.187 [2024-11-07 10:55:20.607280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.187 [2024-11-07 10:55:20.607294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.187 [2024-11-07 10:55:20.607301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.187 [2024-11-07 10:55:20.607307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.187 [2024-11-07 10:55:20.607322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.187 qpair failed and we were unable to recover it. 00:26:53.187 [2024-11-07 10:55:20.617311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.187 [2024-11-07 10:55:20.617366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.187 [2024-11-07 10:55:20.617380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.187 [2024-11-07 10:55:20.617388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.187 [2024-11-07 10:55:20.617393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.187 [2024-11-07 10:55:20.617408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.187 qpair failed and we were unable to recover it. 
00:26:53.187 [2024-11-07 10:55:20.627365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.187 [2024-11-07 10:55:20.627464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.187 [2024-11-07 10:55:20.627479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.187 [2024-11-07 10:55:20.627485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.187 [2024-11-07 10:55:20.627492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.187 [2024-11-07 10:55:20.627507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.187 qpair failed and we were unable to recover it. 00:26:53.187 [2024-11-07 10:55:20.637379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.187 [2024-11-07 10:55:20.637439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.188 [2024-11-07 10:55:20.637453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.188 [2024-11-07 10:55:20.637459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.188 [2024-11-07 10:55:20.637469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.188 [2024-11-07 10:55:20.637484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.188 qpair failed and we were unable to recover it. 00:26:53.188 [2024-11-07 10:55:20.647421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.188 [2024-11-07 10:55:20.647491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.188 [2024-11-07 10:55:20.647506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.188 [2024-11-07 10:55:20.647512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.188 [2024-11-07 10:55:20.647518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.188 [2024-11-07 10:55:20.647534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.188 qpair failed and we were unable to recover it. 
00:26:53.188 [2024-11-07 10:55:20.657391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.188 [2024-11-07 10:55:20.657454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.188 [2024-11-07 10:55:20.657468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.188 [2024-11-07 10:55:20.657475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.188 [2024-11-07 10:55:20.657481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.188 [2024-11-07 10:55:20.657497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.188 qpair failed and we were unable to recover it. 00:26:53.188 [2024-11-07 10:55:20.667403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.188 [2024-11-07 10:55:20.667466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.188 [2024-11-07 10:55:20.667479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.188 [2024-11-07 10:55:20.667486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.188 [2024-11-07 10:55:20.667493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.188 [2024-11-07 10:55:20.667508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.188 qpair failed and we were unable to recover it. 00:26:53.188 [2024-11-07 10:55:20.677428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.188 [2024-11-07 10:55:20.677493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.188 [2024-11-07 10:55:20.677507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.188 [2024-11-07 10:55:20.677514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.188 [2024-11-07 10:55:20.677520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.188 [2024-11-07 10:55:20.677535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.188 qpair failed and we were unable to recover it. 
00:26:53.188 [2024-11-07 10:55:20.687388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.188 [2024-11-07 10:55:20.687457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.188 [2024-11-07 10:55:20.687472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.188 [2024-11-07 10:55:20.687479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.188 [2024-11-07 10:55:20.687485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.188 [2024-11-07 10:55:20.687500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.188 qpair failed and we were unable to recover it. 00:26:53.188 [2024-11-07 10:55:20.697497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.188 [2024-11-07 10:55:20.697578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.188 [2024-11-07 10:55:20.697592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.188 [2024-11-07 10:55:20.697599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.188 [2024-11-07 10:55:20.697605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.188 [2024-11-07 10:55:20.697621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.188 qpair failed and we were unable to recover it. 00:26:53.188 [2024-11-07 10:55:20.707544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.188 [2024-11-07 10:55:20.707598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.188 [2024-11-07 10:55:20.707612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.188 [2024-11-07 10:55:20.707619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.188 [2024-11-07 10:55:20.707625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.188 [2024-11-07 10:55:20.707640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.188 qpair failed and we were unable to recover it. 
00:26:53.188 [2024-11-07 10:55:20.717534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.188 [2024-11-07 10:55:20.717587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.188 [2024-11-07 10:55:20.717600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.188 [2024-11-07 10:55:20.717607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.188 [2024-11-07 10:55:20.717613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.188 [2024-11-07 10:55:20.717628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.188 qpair failed and we were unable to recover it. 00:26:53.188 [2024-11-07 10:55:20.727567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.188 [2024-11-07 10:55:20.727625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.188 [2024-11-07 10:55:20.727641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.188 [2024-11-07 10:55:20.727648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.188 [2024-11-07 10:55:20.727654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.188 [2024-11-07 10:55:20.727668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.188 qpair failed and we were unable to recover it. 00:26:53.188 [2024-11-07 10:55:20.737643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.188 [2024-11-07 10:55:20.737730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.188 [2024-11-07 10:55:20.737745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.188 [2024-11-07 10:55:20.737752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.188 [2024-11-07 10:55:20.737758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.188 [2024-11-07 10:55:20.737773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.188 qpair failed and we were unable to recover it. 
00:26:53.188 [2024-11-07 10:55:20.747633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.188 [2024-11-07 10:55:20.747690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.188 [2024-11-07 10:55:20.747704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.188 [2024-11-07 10:55:20.747710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.188 [2024-11-07 10:55:20.747718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.188 [2024-11-07 10:55:20.747733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.188 qpair failed and we were unable to recover it. 00:26:53.188 [2024-11-07 10:55:20.757656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.189 [2024-11-07 10:55:20.757749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.189 [2024-11-07 10:55:20.757763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.189 [2024-11-07 10:55:20.757770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.189 [2024-11-07 10:55:20.757777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.189 [2024-11-07 10:55:20.757792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.189 qpair failed and we were unable to recover it. 00:26:53.189 [2024-11-07 10:55:20.767686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.189 [2024-11-07 10:55:20.767741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.189 [2024-11-07 10:55:20.767754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.189 [2024-11-07 10:55:20.767761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.189 [2024-11-07 10:55:20.767770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.189 [2024-11-07 10:55:20.767785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.189 qpair failed and we were unable to recover it. 
00:26:53.189 [2024-11-07 10:55:20.777719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.189 [2024-11-07 10:55:20.777773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.189 [2024-11-07 10:55:20.777787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.189 [2024-11-07 10:55:20.777794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.189 [2024-11-07 10:55:20.777800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.189 [2024-11-07 10:55:20.777814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.189 qpair failed and we were unable to recover it. 00:26:53.189 [2024-11-07 10:55:20.787739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.189 [2024-11-07 10:55:20.787797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.189 [2024-11-07 10:55:20.787811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.189 [2024-11-07 10:55:20.787818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.189 [2024-11-07 10:55:20.787824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.189 [2024-11-07 10:55:20.787838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.189 qpair failed and we were unable to recover it. 00:26:53.189 [2024-11-07 10:55:20.797786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.189 [2024-11-07 10:55:20.797841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.189 [2024-11-07 10:55:20.797855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.189 [2024-11-07 10:55:20.797862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.189 [2024-11-07 10:55:20.797868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.189 [2024-11-07 10:55:20.797883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.189 qpair failed and we were unable to recover it. 
00:26:53.189 [2024-11-07 10:55:20.807803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.189 [2024-11-07 10:55:20.807860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.189 [2024-11-07 10:55:20.807874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.189 [2024-11-07 10:55:20.807881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.189 [2024-11-07 10:55:20.807887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.189 [2024-11-07 10:55:20.807901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.189 qpair failed and we were unable to recover it. 00:26:53.189 [2024-11-07 10:55:20.817839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.189 [2024-11-07 10:55:20.817897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.189 [2024-11-07 10:55:20.817911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.189 [2024-11-07 10:55:20.817918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.189 [2024-11-07 10:55:20.817924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.189 [2024-11-07 10:55:20.817939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.189 qpair failed and we were unable to recover it. 00:26:53.189 [2024-11-07 10:55:20.827863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.189 [2024-11-07 10:55:20.827919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.189 [2024-11-07 10:55:20.827932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.189 [2024-11-07 10:55:20.827939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.189 [2024-11-07 10:55:20.827945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.189 [2024-11-07 10:55:20.827959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.189 qpair failed and we were unable to recover it. 
00:26:53.189 [2024-11-07 10:55:20.837819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.189 [2024-11-07 10:55:20.837875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.189 [2024-11-07 10:55:20.837889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.189 [2024-11-07 10:55:20.837896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.189 [2024-11-07 10:55:20.837902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.189 [2024-11-07 10:55:20.837917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.189 qpair failed and we were unable to recover it. 00:26:53.189 [2024-11-07 10:55:20.847924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.189 [2024-11-07 10:55:20.847982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.189 [2024-11-07 10:55:20.847997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.189 [2024-11-07 10:55:20.848004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.189 [2024-11-07 10:55:20.848010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.189 [2024-11-07 10:55:20.848025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.189 qpair failed and we were unable to recover it. 00:26:53.449 [2024-11-07 10:55:20.857952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.449 [2024-11-07 10:55:20.858011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.449 [2024-11-07 10:55:20.858029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.449 [2024-11-07 10:55:20.858035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.449 [2024-11-07 10:55:20.858041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.449 [2024-11-07 10:55:20.858056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.449 qpair failed and we were unable to recover it. 
00:26:53.449 [2024-11-07 10:55:20.868007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.449 [2024-11-07 10:55:20.868061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.449 [2024-11-07 10:55:20.868075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.449 [2024-11-07 10:55:20.868082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.449 [2024-11-07 10:55:20.868087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.449 [2024-11-07 10:55:20.868102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.449 qpair failed and we were unable to recover it. 00:26:53.449 [2024-11-07 10:55:20.878014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.449 [2024-11-07 10:55:20.878068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.449 [2024-11-07 10:55:20.878081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.449 [2024-11-07 10:55:20.878088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.449 [2024-11-07 10:55:20.878094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.449 [2024-11-07 10:55:20.878109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.449 qpair failed and we were unable to recover it. 00:26:53.449 [2024-11-07 10:55:20.888029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.449 [2024-11-07 10:55:20.888086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.449 [2024-11-07 10:55:20.888099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.449 [2024-11-07 10:55:20.888106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.449 [2024-11-07 10:55:20.888112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.449 [2024-11-07 10:55:20.888126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.449 qpair failed and we were unable to recover it. 
00:26:53.449 [2024-11-07 10:55:20.898063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.449 [2024-11-07 10:55:20.898119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.449 [2024-11-07 10:55:20.898133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.449 [2024-11-07 10:55:20.898145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.449 [2024-11-07 10:55:20.898152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.449 [2024-11-07 10:55:20.898167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.449 qpair failed and we were unable to recover it. 00:26:53.449 [2024-11-07 10:55:20.908089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.449 [2024-11-07 10:55:20.908147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.449 [2024-11-07 10:55:20.908161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.449 [2024-11-07 10:55:20.908167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.449 [2024-11-07 10:55:20.908174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.449 [2024-11-07 10:55:20.908188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.449 qpair failed and we were unable to recover it. 00:26:53.449 [2024-11-07 10:55:20.918116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.449 [2024-11-07 10:55:20.918173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.450 [2024-11-07 10:55:20.918186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.450 [2024-11-07 10:55:20.918193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.450 [2024-11-07 10:55:20.918199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.450 [2024-11-07 10:55:20.918213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.450 qpair failed and we were unable to recover it. 
00:26:53.450 [2024-11-07 10:55:20.928082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.450 [2024-11-07 10:55:20.928137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.450 [2024-11-07 10:55:20.928151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.450 [2024-11-07 10:55:20.928158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.450 [2024-11-07 10:55:20.928163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.450 [2024-11-07 10:55:20.928179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.450 qpair failed and we were unable to recover it. 00:26:53.450 [2024-11-07 10:55:20.938201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.450 [2024-11-07 10:55:20.938259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.450 [2024-11-07 10:55:20.938273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.450 [2024-11-07 10:55:20.938280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.450 [2024-11-07 10:55:20.938287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.450 [2024-11-07 10:55:20.938305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.450 qpair failed and we were unable to recover it. 00:26:53.450 [2024-11-07 10:55:20.948201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.450 [2024-11-07 10:55:20.948250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.450 [2024-11-07 10:55:20.948264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.450 [2024-11-07 10:55:20.948271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.450 [2024-11-07 10:55:20.948277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.450 [2024-11-07 10:55:20.948292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.450 qpair failed and we were unable to recover it. 
00:26:53.450 [2024-11-07 10:55:20.958221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.450 [2024-11-07 10:55:20.958280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.450 [2024-11-07 10:55:20.958294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.450 [2024-11-07 10:55:20.958301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.450 [2024-11-07 10:55:20.958307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.450 [2024-11-07 10:55:20.958322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.450 qpair failed and we were unable to recover it. 00:26:53.450 [2024-11-07 10:55:20.968252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.450 [2024-11-07 10:55:20.968307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.450 [2024-11-07 10:55:20.968321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.450 [2024-11-07 10:55:20.968327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.450 [2024-11-07 10:55:20.968333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.450 [2024-11-07 10:55:20.968348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.450 qpair failed and we were unable to recover it. 00:26:53.450 [2024-11-07 10:55:20.978301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.450 [2024-11-07 10:55:20.978358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.450 [2024-11-07 10:55:20.978373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.450 [2024-11-07 10:55:20.978380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.450 [2024-11-07 10:55:20.978387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.450 [2024-11-07 10:55:20.978402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.450 qpair failed and we were unable to recover it. 
00:26:53.450 [2024-11-07 10:55:20.988326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.450 [2024-11-07 10:55:20.988383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.450 [2024-11-07 10:55:20.988398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.450 [2024-11-07 10:55:20.988404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.450 [2024-11-07 10:55:20.988410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.450 [2024-11-07 10:55:20.988426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.450 qpair failed and we were unable to recover it. 00:26:53.450 [2024-11-07 10:55:20.998338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.450 [2024-11-07 10:55:20.998390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.450 [2024-11-07 10:55:20.998404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.450 [2024-11-07 10:55:20.998411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.450 [2024-11-07 10:55:20.998417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.450 [2024-11-07 10:55:20.998431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.450 qpair failed and we were unable to recover it. 00:26:53.450 [2024-11-07 10:55:21.008424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.450 [2024-11-07 10:55:21.008498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.450 [2024-11-07 10:55:21.008512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.450 [2024-11-07 10:55:21.008519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.450 [2024-11-07 10:55:21.008525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.450 [2024-11-07 10:55:21.008540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.450 qpair failed and we were unable to recover it. 
00:26:53.450 [2024-11-07 10:55:21.018414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.450 [2024-11-07 10:55:21.018476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.450 [2024-11-07 10:55:21.018490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.450 [2024-11-07 10:55:21.018497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.450 [2024-11-07 10:55:21.018503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.450 [2024-11-07 10:55:21.018518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.450 qpair failed and we were unable to recover it. 00:26:53.450 [2024-11-07 10:55:21.028441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.450 [2024-11-07 10:55:21.028497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.450 [2024-11-07 10:55:21.028511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.450 [2024-11-07 10:55:21.028521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.450 [2024-11-07 10:55:21.028528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.450 [2024-11-07 10:55:21.028544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.450 qpair failed and we were unable to recover it. 00:26:53.450 [2024-11-07 10:55:21.038459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.450 [2024-11-07 10:55:21.038516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.450 [2024-11-07 10:55:21.038530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.450 [2024-11-07 10:55:21.038536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.450 [2024-11-07 10:55:21.038542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.450 [2024-11-07 10:55:21.038557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.450 qpair failed and we were unable to recover it. 
00:26:53.450 [2024-11-07 10:55:21.048498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.451 [2024-11-07 10:55:21.048555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.451 [2024-11-07 10:55:21.048568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.451 [2024-11-07 10:55:21.048575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.451 [2024-11-07 10:55:21.048581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.451 [2024-11-07 10:55:21.048596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.451 qpair failed and we were unable to recover it. 00:26:53.451 [2024-11-07 10:55:21.058455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.451 [2024-11-07 10:55:21.058513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.451 [2024-11-07 10:55:21.058527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.451 [2024-11-07 10:55:21.058534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.451 [2024-11-07 10:55:21.058540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.451 [2024-11-07 10:55:21.058555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.451 qpair failed and we were unable to recover it. 00:26:53.451 [2024-11-07 10:55:21.068547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.451 [2024-11-07 10:55:21.068604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.451 [2024-11-07 10:55:21.068618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.451 [2024-11-07 10:55:21.068624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.451 [2024-11-07 10:55:21.068630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.451 [2024-11-07 10:55:21.068649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.451 qpair failed and we were unable to recover it. 
00:26:53.451 [2024-11-07 10:55:21.078560] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.451 [2024-11-07 10:55:21.078615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.451 [2024-11-07 10:55:21.078629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.451 [2024-11-07 10:55:21.078636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.451 [2024-11-07 10:55:21.078642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.451 [2024-11-07 10:55:21.078656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.451 qpair failed and we were unable to recover it. 00:26:53.451 [2024-11-07 10:55:21.088600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.451 [2024-11-07 10:55:21.088657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.451 [2024-11-07 10:55:21.088671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.451 [2024-11-07 10:55:21.088678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.451 [2024-11-07 10:55:21.088684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.451 [2024-11-07 10:55:21.088699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.451 qpair failed and we were unable to recover it. 00:26:53.451 [2024-11-07 10:55:21.098666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.451 [2024-11-07 10:55:21.098722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.451 [2024-11-07 10:55:21.098736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.451 [2024-11-07 10:55:21.098743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.451 [2024-11-07 10:55:21.098749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.451 [2024-11-07 10:55:21.098764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.451 qpair failed and we were unable to recover it. 
00:26:53.451 [2024-11-07 10:55:21.108652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.451 [2024-11-07 10:55:21.108707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.451 [2024-11-07 10:55:21.108721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.451 [2024-11-07 10:55:21.108728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.451 [2024-11-07 10:55:21.108734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.451 [2024-11-07 10:55:21.108749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.451 qpair failed and we were unable to recover it. 00:26:53.710 [2024-11-07 10:55:21.118732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.710 [2024-11-07 10:55:21.118816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.710 [2024-11-07 10:55:21.118830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.710 [2024-11-07 10:55:21.118837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.710 [2024-11-07 10:55:21.118843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.710 [2024-11-07 10:55:21.118858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.710 qpair failed and we were unable to recover it. 00:26:53.710 [2024-11-07 10:55:21.128651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.710 [2024-11-07 10:55:21.128708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.710 [2024-11-07 10:55:21.128722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.710 [2024-11-07 10:55:21.128729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.710 [2024-11-07 10:55:21.128735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.710 [2024-11-07 10:55:21.128749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.710 qpair failed and we were unable to recover it. 
00:26:53.710 [2024-11-07 10:55:21.138791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.710 [2024-11-07 10:55:21.138858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.710 [2024-11-07 10:55:21.138872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.710 [2024-11-07 10:55:21.138878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.710 [2024-11-07 10:55:21.138885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.710 [2024-11-07 10:55:21.138899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.710 qpair failed and we were unable to recover it. 00:26:53.710 [2024-11-07 10:55:21.148756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.710 [2024-11-07 10:55:21.148809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.710 [2024-11-07 10:55:21.148823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.710 [2024-11-07 10:55:21.148830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.710 [2024-11-07 10:55:21.148835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.710 [2024-11-07 10:55:21.148850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.710 qpair failed and we were unable to recover it. 00:26:53.710 [2024-11-07 10:55:21.158832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.710 [2024-11-07 10:55:21.158902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.710 [2024-11-07 10:55:21.158920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.710 [2024-11-07 10:55:21.158927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.710 [2024-11-07 10:55:21.158933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.710 [2024-11-07 10:55:21.158948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.710 qpair failed and we were unable to recover it. 
00:26:53.710 [2024-11-07 10:55:21.168818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.710 [2024-11-07 10:55:21.168874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.710 [2024-11-07 10:55:21.168888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.710 [2024-11-07 10:55:21.168895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.710 [2024-11-07 10:55:21.168900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.710 [2024-11-07 10:55:21.168916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.710 qpair failed and we were unable to recover it. 00:26:53.710 [2024-11-07 10:55:21.178862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.710 [2024-11-07 10:55:21.178920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.710 [2024-11-07 10:55:21.178934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.710 [2024-11-07 10:55:21.178940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.710 [2024-11-07 10:55:21.178946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.710 [2024-11-07 10:55:21.178961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.710 qpair failed and we were unable to recover it. 00:26:53.710 [2024-11-07 10:55:21.188874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.710 [2024-11-07 10:55:21.188931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.710 [2024-11-07 10:55:21.188944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.710 [2024-11-07 10:55:21.188951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.710 [2024-11-07 10:55:21.188958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.710 [2024-11-07 10:55:21.188973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.710 qpair failed and we were unable to recover it. 
00:26:53.710 [2024-11-07 10:55:21.198898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.710 [2024-11-07 10:55:21.198952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.710 [2024-11-07 10:55:21.198966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.710 [2024-11-07 10:55:21.198973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.710 [2024-11-07 10:55:21.198982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.710 [2024-11-07 10:55:21.198998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.710 qpair failed and we were unable to recover it. 00:26:53.710 [2024-11-07 10:55:21.208928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.208987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.209001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.209009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.209015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.209030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 00:26:53.711 [2024-11-07 10:55:21.218982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.219049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.219063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.219070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.219077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.219092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 
00:26:53.711 [2024-11-07 10:55:21.228979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.229038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.229051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.229058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.229064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.229080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 00:26:53.711 [2024-11-07 10:55:21.239029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.239083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.239097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.239105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.239111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.239126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 00:26:53.711 [2024-11-07 10:55:21.249058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.249116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.249130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.249136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.249142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.249157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 
00:26:53.711 [2024-11-07 10:55:21.259075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.259135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.259148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.259155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.259161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.259176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 00:26:53.711 [2024-11-07 10:55:21.269044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.269099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.269112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.269119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.269125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.269140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 00:26:53.711 [2024-11-07 10:55:21.279098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.279153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.279167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.279174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.279180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.279195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 
00:26:53.711 [2024-11-07 10:55:21.289168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.289224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.289241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.289248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.289254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.289269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 00:26:53.711 [2024-11-07 10:55:21.299135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.299193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.299208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.299215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.299221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.299237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 00:26:53.711 [2024-11-07 10:55:21.309214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.309272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.309287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.309294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.309300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.309315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 
00:26:53.711 [2024-11-07 10:55:21.319263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.319319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.319332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.319339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.319345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.319360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 00:26:53.711 [2024-11-07 10:55:21.329275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.329332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.329346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.329353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.329362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.329377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 00:26:53.711 [2024-11-07 10:55:21.339341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.339399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.339413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.339420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.339426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.339446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 
00:26:53.711 [2024-11-07 10:55:21.349368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.349423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.349441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.349448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.349454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.349469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 00:26:53.711 [2024-11-07 10:55:21.359306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.359359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.359374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.359381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.359387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.359402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 00:26:53.711 [2024-11-07 10:55:21.369418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.711 [2024-11-07 10:55:21.369478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.711 [2024-11-07 10:55:21.369493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.711 [2024-11-07 10:55:21.369500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.711 [2024-11-07 10:55:21.369506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.711 [2024-11-07 10:55:21.369521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.711 qpair failed and we were unable to recover it. 
00:26:53.970 [2024-11-07 10:55:21.379428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.970 [2024-11-07 10:55:21.379491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.970 [2024-11-07 10:55:21.379505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.970 [2024-11-07 10:55:21.379513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.970 [2024-11-07 10:55:21.379519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.970 [2024-11-07 10:55:21.379535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.970 qpair failed and we were unable to recover it. 00:26:53.970 [2024-11-07 10:55:21.389454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.970 [2024-11-07 10:55:21.389512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.970 [2024-11-07 10:55:21.389526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.970 [2024-11-07 10:55:21.389533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.970 [2024-11-07 10:55:21.389540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.970 [2024-11-07 10:55:21.389555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.970 qpair failed and we were unable to recover it. 00:26:53.970 [2024-11-07 10:55:21.399426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.970 [2024-11-07 10:55:21.399486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.970 [2024-11-07 10:55:21.399500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.970 [2024-11-07 10:55:21.399507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.970 [2024-11-07 10:55:21.399514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.970 [2024-11-07 10:55:21.399529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.970 qpair failed and we were unable to recover it. 
00:26:53.970 [2024-11-07 10:55:21.409449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.970 [2024-11-07 10:55:21.409507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.971 [2024-11-07 10:55:21.409521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.971 [2024-11-07 10:55:21.409528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.971 [2024-11-07 10:55:21.409535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8970000b90 00:26:53.971 [2024-11-07 10:55:21.409550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:53.971 qpair failed and we were unable to recover it. 00:26:53.971 [2024-11-07 10:55:21.419585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.971 [2024-11-07 10:55:21.419653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.971 [2024-11-07 10:55:21.419680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.971 [2024-11-07 10:55:21.419690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.971 [2024-11-07 10:55:21.419699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f1dbe0 00:26:53.971 [2024-11-07 10:55:21.419721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.971 qpair failed and we were unable to recover it. 00:26:53.971 [2024-11-07 10:55:21.429618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:53.971 [2024-11-07 10:55:21.429675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:53.971 [2024-11-07 10:55:21.429693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:53.971 [2024-11-07 10:55:21.429701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:53.971 [2024-11-07 10:55:21.429709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f1dbe0 00:26:53.971 [2024-11-07 10:55:21.429727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:53.971 qpair failed and we were unable to recover it. 00:26:53.971 [2024-11-07 10:55:21.429814] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:26:53.971 A controller has encountered a failure and is being reset. 00:26:53.971 Controller properly reset. 
00:26:53.971 Initializing NVMe Controllers 00:26:53.971 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:53.971 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:53.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:53.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:53.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:53.971 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:53.971 Initialization complete. Launching workers. 00:26:53.971 Starting thread on core 1 00:26:53.971 Starting thread on core 2 00:26:53.971 Starting thread on core 3 00:26:53.971 Starting thread on core 0 00:26:53.971 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:53.971 00:26:53.971 real 0m11.401s 00:26:53.971 user 0m22.053s 00:26:53.971 sys 0m4.658s 00:26:53.971 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:53.971 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:53.971 ************************************ 00:26:53.971 END TEST nvmf_target_disconnect_tc2 00:26:53.971 ************************************ 00:26:53.971 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:53.971 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:53.971 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:53.971 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:53.971 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:53.971 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:53.971 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:53.971 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.971 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:54.230 rmmod nvme_tcp 00:26:54.230 rmmod nvme_fabrics 00:26:54.230 rmmod nvme_keyring 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2842575 ']' 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2842575 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2842575 ']' 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 2842575 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2842575 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2842575' 00:26:54.230 killing process with pid 2842575 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 2842575 00:26:54.230 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 2842575 00:26:54.489 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:54.489 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:54.489 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:54.489 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:26:54.489 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:26:54.489 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:54.489 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:26:54.489 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:54.489 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:54.489 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.489 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.489 10:55:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.459 10:55:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:56.459 00:26:56.459 real 0m19.378s 00:26:56.459 user 0m49.442s 00:26:56.459 sys 0m9.015s 00:26:56.459 10:55:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:56.459 10:55:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:56.459 ************************************ 00:26:56.459 END TEST nvmf_target_disconnect 00:26:56.459 ************************************ 00:26:56.459 10:55:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:56.459 00:26:56.459 real 5m40.035s 00:26:56.459 user 10m24.257s 00:26:56.459 sys 1m51.171s 00:26:56.459 10:55:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:56.459 10:55:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.459 ************************************ 00:26:56.459 END TEST nvmf_host 00:26:56.459 ************************************ 00:26:56.459 10:55:24 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:56.459 10:55:24 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:56.459 10:55:24 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:56.459 10:55:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:56.459 10:55:24 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:56.459 10:55:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:56.459 ************************************ 00:26:56.459 START TEST nvmf_target_core_interrupt_mode 00:26:56.459 ************************************ 00:26:56.459 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:56.732 * Looking for test storage... 00:26:56.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:56.732 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:56.732 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:56.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.733 --rc genhtml_branch_coverage=1 00:26:56.733 --rc genhtml_function_coverage=1 00:26:56.733 --rc genhtml_legend=1 00:26:56.733 --rc geninfo_all_blocks=1 00:26:56.733 --rc geninfo_unexecuted_blocks=1 00:26:56.733 00:26:56.733 ' 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:56.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.733 --rc genhtml_branch_coverage=1 00:26:56.733 --rc genhtml_function_coverage=1 00:26:56.733 --rc genhtml_legend=1 00:26:56.733 --rc geninfo_all_blocks=1 00:26:56.733 --rc geninfo_unexecuted_blocks=1 00:26:56.733 00:26:56.733 ' 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:56.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.733 --rc genhtml_branch_coverage=1 00:26:56.733 --rc genhtml_function_coverage=1 00:26:56.733 --rc genhtml_legend=1 00:26:56.733 --rc geninfo_all_blocks=1 00:26:56.733 --rc geninfo_unexecuted_blocks=1 00:26:56.733 00:26:56.733 ' 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:56.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.733 --rc genhtml_branch_coverage=1 00:26:56.733 --rc genhtml_function_coverage=1 00:26:56.733 --rc genhtml_legend=1 00:26:56.733 --rc geninfo_all_blocks=1 00:26:56.733 --rc geninfo_unexecuted_blocks=1 00:26:56.733 00:26:56.733 ' 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.733 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:56.734 ************************************ 00:26:56.734 START TEST nvmf_abort 00:26:56.734 ************************************ 00:26:56.734 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:56.994 * Looking for test storage... 00:26:56.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:56.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.994 --rc genhtml_branch_coverage=1 00:26:56.994 --rc genhtml_function_coverage=1 00:26:56.994 --rc genhtml_legend=1 00:26:56.994 --rc geninfo_all_blocks=1 00:26:56.994 --rc geninfo_unexecuted_blocks=1 00:26:56.994 00:26:56.994 ' 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:56.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.994 --rc genhtml_branch_coverage=1 00:26:56.994 --rc genhtml_function_coverage=1 00:26:56.994 --rc genhtml_legend=1 00:26:56.994 --rc geninfo_all_blocks=1 00:26:56.994 --rc geninfo_unexecuted_blocks=1 00:26:56.994 00:26:56.994 ' 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:56.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.994 --rc genhtml_branch_coverage=1 00:26:56.994 --rc genhtml_function_coverage=1 00:26:56.994 --rc genhtml_legend=1 00:26:56.994 --rc geninfo_all_blocks=1 00:26:56.994 --rc geninfo_unexecuted_blocks=1 00:26:56.994 00:26:56.994 ' 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:56.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.994 --rc genhtml_branch_coverage=1 00:26:56.994 --rc genhtml_function_coverage=1 00:26:56.994 --rc genhtml_legend=1 00:26:56.994 --rc geninfo_all_blocks=1 00:26:56.994 --rc geninfo_unexecuted_blocks=1 00:26:56.994 00:26:56.994 ' 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:56.994 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.995 10:55:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:56.995 10:55:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:02.270 10:55:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:02.270 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:02.271 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:02.271 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:02.271 Found net devices under 0000:86:00.0: cvl_0_0 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:02.271 Found net devices under 0000:86:00.1: cvl_0_1 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:02.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:27:02.271 00:27:02.271 --- 10.0.0.2 ping statistics --- 00:27:02.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.271 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:02.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:02.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:27:02.271 00:27:02.271 --- 10.0.0.1 ping statistics --- 00:27:02.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.271 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2847195 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2847195 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2847195 ']' 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:02.271 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:02.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:02.272 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:02.272 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.272 10:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:02.532 [2024-11-07 10:55:29.966774] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:02.532 [2024-11-07 10:55:29.967718] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:27:02.532 [2024-11-07 10:55:29.967755] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:02.532 [2024-11-07 10:55:30.034815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:02.532 [2024-11-07 10:55:30.081276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:02.532 [2024-11-07 10:55:30.081311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:02.532 [2024-11-07 10:55:30.081319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:02.532 [2024-11-07 10:55:30.081326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:02.532 [2024-11-07 10:55:30.081331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:02.532 [2024-11-07 10:55:30.082629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:02.532 [2024-11-07 10:55:30.082651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:02.532 [2024-11-07 10:55:30.082653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:02.532 [2024-11-07 10:55:30.150069] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:02.532 [2024-11-07 10:55:30.150146] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:02.532 [2024-11-07 10:55:30.150499] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:27:02.532 [2024-11-07 10:55:30.150522] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:02.532 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:02.532 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:27:02.532 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:02.532 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:02.532 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.790 [2024-11-07 10:55:30.207417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.790 Malloc0 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.790 Delay0 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.790 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.791 [2024-11-07 10:55:30.275367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.791 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.791 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:02.791 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.791 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:02.791 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.791 10:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:02.791 [2024-11-07 10:55:30.391140] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:05.324 Initializing NVMe Controllers 00:27:05.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:05.324 controller IO queue size 128 less than required 00:27:05.324 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:05.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:05.324 Initialization complete. Launching workers. 
00:27:05.324 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36810 00:27:05.324 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36867, failed to submit 66 00:27:05.324 success 36810, unsuccessful 57, failed 0 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:05.324 rmmod nvme_tcp 00:27:05.324 rmmod nvme_fabrics 00:27:05.324 rmmod nvme_keyring 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2847195 ']' 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2847195 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2847195 ']' 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2847195 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2847195 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2847195' 00:27:05.324 killing process with pid 2847195 
00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2847195 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2847195 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.324 10:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.230 10:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:07.230 00:27:07.230 real 0m10.462s 00:27:07.230 user 0m10.063s 00:27:07.230 sys 0m5.204s 00:27:07.230 10:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:07.230 10:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:07.230 ************************************ 00:27:07.230 END TEST nvmf_abort 00:27:07.230 ************************************ 00:27:07.230 10:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:07.230 10:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:07.230 10:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:07.230 10:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:07.230 ************************************ 00:27:07.230 START TEST nvmf_ns_hotplug_stress 00:27:07.230 ************************************ 00:27:07.230 10:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:07.489 * Looking for test storage... 
00:27:07.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:07.489 10:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:07.489 10:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:27:07.489 10:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:07.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.489 --rc genhtml_branch_coverage=1 00:27:07.489 --rc genhtml_function_coverage=1 00:27:07.489 --rc genhtml_legend=1 00:27:07.489 --rc geninfo_all_blocks=1 00:27:07.489 --rc geninfo_unexecuted_blocks=1 00:27:07.489 00:27:07.489 ' 00:27:07.489 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:07.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.490 --rc genhtml_branch_coverage=1 00:27:07.490 --rc genhtml_function_coverage=1 00:27:07.490 --rc genhtml_legend=1 00:27:07.490 --rc geninfo_all_blocks=1 00:27:07.490 --rc geninfo_unexecuted_blocks=1 00:27:07.490 00:27:07.490 ' 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:07.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.490 --rc genhtml_branch_coverage=1 00:27:07.490 --rc genhtml_function_coverage=1 00:27:07.490 --rc genhtml_legend=1 00:27:07.490 --rc geninfo_all_blocks=1 00:27:07.490 --rc geninfo_unexecuted_blocks=1 00:27:07.490 00:27:07.490 ' 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:07.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.490 --rc genhtml_branch_coverage=1 00:27:07.490 --rc genhtml_function_coverage=1 
00:27:07.490 --rc genhtml_legend=1 00:27:07.490 --rc geninfo_all_blocks=1 00:27:07.490 --rc geninfo_unexecuted_blocks=1 00:27:07.490 00:27:07.490 ' 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:07.490 10:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:12.752 10:55:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:12.752 10:55:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:12.752 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:12.753 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:12.753 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:12.753 
10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:12.753 Found net devices under 0000:86:00.0: cvl_0_0 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:12.753 Found net devices under 0000:86:00.1: cvl_0_1 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.753 10:55:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:12.753 10:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.753 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.753 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.753 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:12.753 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:12.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:27:12.753 00:27:12.753 --- 10.0.0.2 ping statistics --- 00:27:12.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.753 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:27:12.753 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:12.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:27:12.753 00:27:12.753 --- 10.0.0.1 ping statistics --- 00:27:12.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.753 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2851089 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2851089 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 2851089 ']' 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:12.754 [2024-11-07 10:55:40.175859] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:12.754 [2024-11-07 10:55:40.176784] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:27:12.754 [2024-11-07 10:55:40.176818] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.754 [2024-11-07 10:55:40.245065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:12.754 [2024-11-07 10:55:40.285436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.754 [2024-11-07 10:55:40.285474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.754 [2024-11-07 10:55:40.285482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.754 [2024-11-07 10:55:40.285489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.754 [2024-11-07 10:55:40.285494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:12.754 [2024-11-07 10:55:40.286954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.754 [2024-11-07 10:55:40.287026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:12.754 [2024-11-07 10:55:40.287028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.754 [2024-11-07 10:55:40.354528] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:12.754 [2024-11-07 10:55:40.354566] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:12.754 [2024-11-07 10:55:40.354805] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:12.754 [2024-11-07 10:55:40.354864] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:12.754 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:13.013 [2024-11-07 10:55:40.639797] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.013 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:13.271 10:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:13.529 [2024-11-07 10:55:41.024210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:13.529 10:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:13.788 10:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:13.788 Malloc0 00:27:13.788 10:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:14.046 Delay0 00:27:14.046 10:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:14.303 10:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:14.560 NULL1 00:27:14.560 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:27:14.560 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2851360 00:27:14.560 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:14.560 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:14.560 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:14.817 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:15.074 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:15.074 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:15.332 true 00:27:15.332 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:15.332 10:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:15.589 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:15.589 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:15.589 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:15.846 true 00:27:15.846 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:15.846 10:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.217 Read completed with error (sct=0, sc=11) 00:27:17.217 10:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:17.217 10:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:17.217 10:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:17.474 true 00:27:17.474 10:55:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:17.474 10:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.731 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:17.731 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:17.731 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:17.988 true 00:27:17.988 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:17.989 10:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:19.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.357 10:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:19.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:19.358 10:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:19.358 10:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:19.615 true 00:27:19.615 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:19.615 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:20.546 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:20.546 10:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:20.546 10:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:20.546 10:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:20.803 true 00:27:20.804 10:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:20.804 10:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:21.061 10:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:21.061 10:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:21.061 10:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:21.318 true 00:27:21.318 10:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:21.318 10:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:22.688 10:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:22.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:22.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:22.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:22.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:22.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:22.688 10:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:22.688 10:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:22.945 true 00:27:22.945 10:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:22.945 10:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:23.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:23.876 10:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:23.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:23.876 10:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 
-- # null_size=1009 00:27:23.876 10:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:24.134 true 00:27:24.134 10:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:24.134 10:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.391 10:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:24.648 10:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:24.648 10:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:24.648 true 00:27:24.648 10:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:24.648 10:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:26.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:26.019 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:26.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:26.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:26.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:26.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:26.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:26.019 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:26.019 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:26.276 true 00:27:26.276 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:26.276 10:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:27.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:27.208 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:27.208 10:55:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:27.208 10:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:27.465 true 00:27:27.465 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:27.465 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:27.723 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:27.980 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:27.980 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:27.980 true 00:27:27.980 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:27.980 10:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:29.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:29.351 10:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:29.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:29.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:29.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:29.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:29.351 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:29.351 10:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:29.351 10:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:29.608 true 00:27:29.608 10:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:29.608 10:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:30.540 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:30.541 10:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:27:30.541 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:30.541 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:30.798 true 00:27:30.798 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:30.798 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:31.055 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:31.312 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:31.312 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:31.312 true 00:27:31.312 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:31.312 10:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:32.680 10:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:32.680 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:32.680 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:32.680 true 00:27:32.680 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:32.680 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:32.937 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:33.194 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:33.194 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:33.451 true 00:27:33.451 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2851360 00:27:33.451 10:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:34.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:34.383 10:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:34.641 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:34.641 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:34.897 true 00:27:34.897 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:34.897 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:34.897 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:35.154 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:35.154 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:35.411 true 00:27:35.411 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:35.411 10:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:36.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.782 10:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:36.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:36.782 10:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:36.782 10:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:37.040 true 00:27:37.040 10:56:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:37.040 10:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.971 10:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:37.971 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:37.971 10:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:37.971 10:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:38.229 true 00:27:38.229 10:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:38.229 10:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.486 10:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:38.486 10:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:38.486 10:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:38.744 true 00:27:38.744 10:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:38.744 10:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:40.113 10:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.113 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:40.113 10:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:40.113 10:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:40.370 true 00:27:40.370 10:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:40.370 10:56:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.627 10:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:40.884 10:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:40.884 10:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:40.884 true 00:27:40.884 10:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:40.884 10:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:42.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.254 10:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:42.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.254 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:42.254 10:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:42.254 10:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:42.511 true 00:27:42.511 10:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:42.511 10:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.438 10:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:43.438 10:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:43.438 10:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:43.695 true 00:27:43.695 10:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:43.695 10:56:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:43.952 10:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:44.209 10:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:27:44.209 10:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:27:44.209 true 00:27:44.209 10:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:44.209 10:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:45.578 10:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:45.578 Initializing NVMe Controllers 00:27:45.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:45.578 Controller IO queue size 128, less than required. 00:27:45.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:45.578 Controller IO queue size 128, less than required. 00:27:45.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:45.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:45.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:45.578 Initialization complete. Launching workers. 
00:27:45.578 ======================================================== 00:27:45.578 Latency(us) 00:27:45.578 Device Information : IOPS MiB/s Average min max 00:27:45.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1667.96 0.81 49443.53 2986.42 1013082.95 00:27:45.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16582.91 8.10 7698.18 1262.76 383379.96 00:27:45.578 ======================================================== 00:27:45.578 Total : 18250.88 8.91 11513.33 1262.76 1013082.95 00:27:45.578 00:27:45.578 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:27:45.578 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:27:45.578 true 00:27:45.835 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2851360 00:27:45.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2851360) - No such process 00:27:45.835 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2851360 00:27:45.835 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:45.835 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:46.092 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:27:46.092 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:27:46.092 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:27:46.092 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:46.092 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:27:46.349 null0 00:27:46.349 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:46.349 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:46.349 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:27:46.349 null1 00:27:46.349 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:46.349 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:46.349 10:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:46.606 null2 00:27:46.606 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:46.606 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:46.606 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:46.863 null3 00:27:46.863 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:46.863 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:46.863 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:47.121 null4 00:27:47.121 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:47.121 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:47.121 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:47.121 null5 00:27:47.121 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:47.121 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:47.121 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:47.378 null6 00:27:47.378 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:47.378 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:47.378 10:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:47.635 null7 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # 
pids+=($!) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
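
[Editor's note] The null_size=1004…1029 cycle traced above repeats the same RPC steps while the background I/O generator (PID 2851360 in this run) is still alive. The following is a minimal sketch of that loop, reconstructed from the ns_hotplug_stress.sh@44-@50 trace markers rather than copied from the script itself; the variable names (perf_pid) and the starting size are assumptions.

#!/usr/bin/env bash
# Sketch of the hot-plug/resize cycle seen in the trace above (reconstruction, not the actual script).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1
perf_pid=$1      # assumed: PID of the background I/O generator (2851360 in this run)
null_size=1000   # assumed starting value; the trace shows it climbing through 1004..1029

while kill -0 "$perf_pid"; do                        # sh@44: keep cycling while I/O still runs
    $rpc_py nvmf_subsystem_remove_ns "$subsys" 1     # sh@45: hot-remove namespace 1
    $rpc_py nvmf_subsystem_add_ns "$subsys" Delay0   # sh@46: hot-add it back on the Delay0 bdev
    null_size=$((null_size + 1))                     # sh@49
    $rpc_py bdev_null_resize NULL1 "$null_size"      # sh@50: grow the NULL1 bdev under load
done
$rpc_py nvmf_subsystem_remove_ns "$subsys" 1         # sh@54: cleanup once the generator has exited
$rpc_py nvmf_subsystem_remove_ns "$subsys" 2         # sh@55

The loop exits exactly as the log shows: once the I/O process is gone, kill -0 prints "No such process" and the script falls through to the cleanup removals, with each bdev_null_resize along the way forcing a namespace size change to be reported to the connected initiator while I/O is in flight.
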
00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
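
[Editor's note] From sh@58 onward the trace switches to a concurrent phase: eight null bdevs (null0…null7) are created, then an add_remove worker is launched in the background for each, with the PIDs collected and waited on (the "wait 2856696 …" entry just below). The interleaved sh@14-@18 and sh@62-@64 markers in the surrounding lines are those background workers and the launcher loop running at the same time. A sketch of the pattern as read off the trace markers follows; the function body and loop bounds are inferred from the trace, not copied from the script.

#!/usr/bin/env bash
# Sketch of the concurrent namespace add/remove phase (reconstruction, inferred from the trace).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

add_remove() {                                   # sh@14: one worker per namespace ID
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do               # sh@16: ten add/remove rounds per worker
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"   # sh@17
        $rpc_py nvmf_subsystem_remove_ns "$subsys" "$nsid"           # sh@18
    done
}

nthreads=8                                       # sh@58
pids=()
for ((i = 0; i < nthreads; i++)); do             # sh@59-@60: one 100 MB, 4096-byte-block null bdev each
    $rpc_py bdev_null_create "null$i" 100 4096
done
for ((i = 0; i < nthreads; i++)); do             # sh@62-@64: run all workers concurrently
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"                                # sh@66: wait for all eight workers

Note that the namespace IDs are offset by one from the bdev names (worker i drives nsid i+1 on null<i>), which matches the "add_remove 3 null2" and "add_remove 4 null3" entries in this part of the trace.
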
00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2856696 2856698 2856699 2856702 2856703 2856705 2856707 2856709 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:47.636 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.637 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:27:47.894 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:47.895 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:48.153 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:48.412 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.413 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:48.413 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.413 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.413 10:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:48.671 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:48.671 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:48.671 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:48.671 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:48.671 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:48.671 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.671 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:48.671 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:48.930 10:56:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:48.930 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:49.187 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:49.187 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.187 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:49.187 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.187 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.187 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:49.187 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.187 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.187 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:49.187 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.187 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.188 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:49.445 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:49.445 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:49.445 10:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:49.445 10:56:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:49.445 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.445 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:49.445 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:49.445 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.702 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:49.960 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:49.961 
10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:49.961 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.219 10:56:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:50.219 10:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.477 10:56:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:50.477 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:50.734 10:56:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:50.734 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:50.734 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.734 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:50.734 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:50.734 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:50.734 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:50.734 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:51.046 10:56:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:51.046 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.305 10:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:51.569 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:51.569 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:51.569 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.569 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:51.569 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:51.569 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:51.569 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:51.569 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:51.841 rmmod nvme_tcp 00:27:51.841 rmmod nvme_fabrics 00:27:51.841 rmmod nvme_keyring 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2851089 ']' 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2851089 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2851089 ']' 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2851089 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2851089 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2851089' 00:27:51.841 killing process with pid 2851089 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2851089 00:27:51.841 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2851089 00:27:52.109 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:52.109 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:52.109 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:52.109 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:27:52.109 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:27:52.109 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:52.109 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:27:52.109 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:52.109 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:52.109 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.109 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
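The loop iterations traced above come from target/ns_hotplug_stress.sh lines 16-18: for ten rounds, namespaces 1-8 (backed by bdevs null0-null7) are re-added to and then removed from nqn.2016-06.io.spdk:cnode1 via rpc.py. Below is a minimal bash sketch of that cycle, reconstructed only from the rpc.py calls visible in this trace; rpc_py and subsys are local shorthand, and the actual script may interleave or parallelize the calls, as the shuffled ordering above suggests.

    # Sketch of the namespace hotplug cycle seen in the trace (not the actual script).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as it appears above
    subsys=nqn.2016-06.io.spdk:cnode1

    for (( i = 0; i < 10; i++ )); do
        # Attach null0..null7 as namespaces 1..8 ...
        for n in {1..8}; do
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))"
        done
        # ... then detach them again.
        for n in {1..8}; do
            "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$n"
        done
    done

Each add/remove round presumably exercises namespace hot-plug handling on the attached host, which is what the test name refers to.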
00:27:52.109 10:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:54.011 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:54.011
00:27:54.011 real 0m46.773s
00:27:54.011 user 2m57.995s
00:27:54.011 sys 0m19.191s
00:27:54.011 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:27:54.011 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:27:54.011 ************************************
00:27:54.011 END TEST nvmf_ns_hotplug_stress
00:27:54.011 ************************************
00:27:54.269 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:27:54.269 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:27:54.269 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:27:54.269 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:27:54.269 ************************************
00:27:54.269 START TEST nvmf_delete_subsystem
00:27:54.269 ************************************
00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:27:54.270 * Looking for test storage...
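The nvmftestfini sequence traced just above the END TEST banner tears the environment down: the kernel nvme-tcp/nvme-fabrics modules are unloaded (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines are the modprobe -v output), the target process (pid 2851089 in this run) is killed and reaped, iptables rules not tagged SPDK_NVMF are restored, and the test interface address is flushed. A condensed sketch of that sequence, following only the commands visible in the trace; the helper name and argument handling are illustrative, not the actual nvmf/common.sh.

    cleanup_nvmf_target() {                       # illustrative helper, not the real nvmftestfini
        local pid=$1 iface=$2

        sync
        modprobe -v -r nvme-tcp                   # prints the rmmod commands it runs, as seen above
        modprobe -v -r nvme-fabrics

        if kill -0 "$pid" 2>/dev/null; then       # only kill the target app if it is still alive
            kill "$pid"
            wait "$pid" 2>/dev/null || true
        fi

        iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK_NVMF-tagged rules, keep the rest
        ip -4 addr flush "$iface"                 # cvl_0_1 in this run
    }

    # e.g.: cleanup_nvmf_target 2851089 cvl_0_1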
00:27:54.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:54.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.270 --rc genhtml_branch_coverage=1 00:27:54.270 --rc genhtml_function_coverage=1 00:27:54.270 --rc genhtml_legend=1 00:27:54.270 --rc geninfo_all_blocks=1 00:27:54.270 --rc geninfo_unexecuted_blocks=1 00:27:54.270 00:27:54.270 ' 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:54.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.270 --rc genhtml_branch_coverage=1 00:27:54.270 --rc genhtml_function_coverage=1 00:27:54.270 --rc genhtml_legend=1 00:27:54.270 --rc geninfo_all_blocks=1 00:27:54.270 --rc geninfo_unexecuted_blocks=1 00:27:54.270 00:27:54.270 ' 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:54.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.270 --rc genhtml_branch_coverage=1 00:27:54.270 --rc genhtml_function_coverage=1 00:27:54.270 --rc genhtml_legend=1 00:27:54.270 --rc geninfo_all_blocks=1 00:27:54.270 --rc geninfo_unexecuted_blocks=1 00:27:54.270 00:27:54.270 ' 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:54.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.270 --rc genhtml_branch_coverage=1 00:27:54.270 --rc genhtml_function_coverage=1 00:27:54.270 --rc 
genhtml_legend=1 00:27:54.270 --rc geninfo_all_blocks=1 00:27:54.270 --rc geninfo_unexecuted_blocks=1 00:27:54.270 00:27:54.270 ' 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.270 10:56:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.270 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:54.271 10:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:59.538 10:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:59.538 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:59.539 10:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:59.539 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:59.539 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.539 10:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:59.539 Found net devices under 0000:86:00.0: cvl_0_0 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:59.539 Found net devices under 0000:86:00.1: cvl_0_1 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.539 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:59.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:27:59.799 00:27:59.799 --- 10.0.0.2 ping statistics --- 00:27:59.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.799 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:27:59.799 00:27:59.799 --- 10.0.0.1 ping statistics --- 00:27:59.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.799 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2861070 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2861070 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2861070 ']' 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
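The entries above show nvmf_tcp_init carving the two E810 ports (0000:86:00.0/.1, exposed as cvl_0_0 and cvl_0_1) into a point-to-point test topology: cvl_0_0 is moved into a fresh network namespace and becomes the target side at 10.0.0.2/24, cvl_0_1 stays in the host namespace as the initiator at 10.0.0.1/24, an iptables rule accepts TCP/4420 on the initiator interface, and a ping in each direction confirms connectivity. A minimal standalone sketch of the same plumbing, using the interface and namespace names seen in this log (the real helper in nvmf/common.sh additionally flushes stale addresses and tags the iptables rule with a comment):

  #!/usr/bin/env bash
  set -e
  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk    # names taken from the log above
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                     # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                 # initiator side, host namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                    # host -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                # namespace -> host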
00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:59.799 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:59.799 [2024-11-07 10:56:27.439762] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:59.799 [2024-11-07 10:56:27.440704] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:27:59.799 [2024-11-07 10:56:27.440742] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.058 [2024-11-07 10:56:27.508671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:00.058 [2024-11-07 10:56:27.548789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.058 [2024-11-07 10:56:27.548826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.058 [2024-11-07 10:56:27.548834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.058 [2024-11-07 10:56:27.548840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.058 [2024-11-07 10:56:27.548845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:00.058 [2024-11-07 10:56:27.550006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.058 [2024-11-07 10:56:27.550009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.058 [2024-11-07 10:56:27.616736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:00.058 [2024-11-07 10:56:27.617130] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:00.058 [2024-11-07 10:56:27.617141] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
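The target itself is then launched inside that namespace with interrupt mode enabled (nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3, i.e. cores 0-1), and the notices above confirm both reactors and the app/poll-group threads switching to interrupt mode before any RPCs are issued. A rough sketch of launching the target the same way and waiting for its RPC socket to answer; the readiness poll via rpc_get_methods is an assumption for illustration (the test's waitforlisten helper does this more carefully), and paths assume the SPDK repo root as working directory:

  NS_EXEC="ip netns exec cvl_0_0_ns_spdk"
  $NS_EXEC ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # Poll until the app answers on /var/tmp/spdk.sock, or give up after ~10 s.
  for _ in $(seq 1 100); do
      if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      kill -0 "$nvmfpid" || exit 1    # target died during startup
      sleep 0.1
  done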
00:28:00.058 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:00.058 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:28:00.058 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:00.058 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:00.058 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:00.058 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.058 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:00.058 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.058 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:00.058 [2024-11-07 10:56:27.686618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.058 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.058 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:00.058 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.058 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:00.059 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.059 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:00.059 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.059 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:00.059 [2024-11-07 10:56:27.711116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.059 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.059 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:00.059 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.059 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:00.317 NULL1 00:28:00.317 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.317 10:56:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:00.317 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.317 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:00.317 Delay0 00:28:00.317 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.317 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:00.317 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.317 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:00.317 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.317 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2861090 00:28:00.317 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:00.317 10:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:00.317 [2024-11-07 10:56:27.806746] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
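With the target up, delete_subsystem.sh builds the data path over RPC and then deliberately tears the subsystem down while I/O is still in flight: a TCP transport, subsystem cnode1 (128-entry queues, up to 10 namespaces), a listener on 10.0.0.2:4420, and a NULL1 null bdev wrapped in a Delay0 delay bdev (so requests stay queued) attached as a namespace; spdk_nvme_perf runs against it for 5 seconds, and after a 2-second sleep the subsystem is deleted out from under it, which is what produces the burst of aborted completions that follows. The same sequence, roughly, as plain rpc.py calls rather than the test's rpc_cmd wrapper (arguments copied from the log above; paths assume the SPDK repo root):

  RPC="./scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512                  # null backing bdev, 512-byte blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # keep I/O outstanding
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # pull the subsystem away mid-I/O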
00:28:02.217 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.217 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.217 10:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 
00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 [2024-11-07 10:56:30.047826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51680 is same with the state(6) to be set 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Write completed with error (sct=0, sc=8) 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.475 starting I/O failed: -6 00:28:02.475 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 starting I/O failed: -6 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 starting I/O failed: -6 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 starting I/O failed: -6 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 [2024-11-07 10:56:30.048192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4778000c40 is same with the state(6) to be set 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 
00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error 
(sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:02.476 Write completed with error (sct=0, sc=8) 00:28:02.476 Read completed with error (sct=0, sc=8) 00:28:03.411 [2024-11-07 10:56:31.024975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b529b0 is same with the state(6) to be set 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 
Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 [2024-11-07 10:56:31.049732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51860 is same with the state(6) to be set 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 [2024-11-07 10:56:31.049921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b514a0 is same with the state(6) to be set 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with 
error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 [2024-11-07 10:56:31.050091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b512c0 is same with the state(6) to be set 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Write completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 Read completed with error (sct=0, sc=8) 00:28:03.411 [2024-11-07 10:56:31.050862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f477800d350 is same with the state(6) to be set 00:28:03.411 Initializing NVMe Controllers 00:28:03.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.411 Controller IO queue size 128, less than required. 00:28:03.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:03.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:03.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:03.412 Initialization complete. Launching workers. 
00:28:03.412 ======================================================== 00:28:03.412 Latency(us) 00:28:03.412 Device Information : IOPS MiB/s Average min max 00:28:03.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.61 0.10 943273.34 1109.15 1010712.10 00:28:03.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.87 0.08 867088.00 241.69 1011688.88 00:28:03.412 ======================================================== 00:28:03.412 Total : 353.48 0.17 909246.74 241.69 1011688.88 00:28:03.412 00:28:03.412 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.412 [2024-11-07 10:56:31.051783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b529b0 (9): Bad file descriptor 00:28:03.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:03.412 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:03.412 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2861090 00:28:03.412 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2861090 00:28:03.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2861090) - No such process 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2861090 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2861090 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2861090 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:03.980 [2024-11-07 10:56:31.583036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2861777 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2861777 00:28:03.980 10:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:04.238 [2024-11-07 10:56:31.652600] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
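For the second pass the subsystem is re-created with the same listener and Delay0 namespace and a shorter 3-second perf run is started; this time, instead of deleting the subsystem, the script simply watches the perf process, as the entries that follow show: it checks kill -0 roughly every half second, bounded at about 20 iterations, until perf exits on its own, then reaps it with wait. A rough sketch of that wait loop (the exact control flow in delete_subsystem.sh may differ slightly; the bound and interval are taken from the log):

  perf_pid=$!                                  # pid of the 3-second spdk_nvme_perf run
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do    # still running?
      sleep 0.5
      (( delay++ > 20 )) && exit 1             # give up after ~10 s
  done
  wait "$perf_pid" 2>/dev/null                 # collect its exit status; harmless if already reaped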
00:28:04.496 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:04.496 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2861777 00:28:04.496 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:05.061 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:05.061 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2861777 00:28:05.062 10:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:05.627 10:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:05.627 10:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2861777 00:28:05.627 10:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:06.194 10:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:06.194 10:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2861777 00:28:06.194 10:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:06.759 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:06.759 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2861777 00:28:06.759 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:07.017 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:07.017 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2861777 00:28:07.017 10:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:07.275 Initializing NVMe Controllers 00:28:07.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:07.275 Controller IO queue size 128, less than required. 00:28:07.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:07.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:07.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:07.275 Initialization complete. Launching workers. 
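The alternating "kill -0 2861777" / "sleep 0.5" entries above are delete_subsystem.sh's bounded wait: kill -0 only tests whether the PID still exists, so the script polls until the perf process goes away or the counter runs out. A sketch of that pattern (not the script's exact control flow), reusing the hypothetical perf_pid from the previous sketch and the 20-iteration cap seen at line 60:

delay=0
# Poll until the background perf process exits; give up after roughly 10 s (20 x 0.5 s)
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
        echo "spdk_nvme_perf still running after timeout" >&2
        break
    fi
    sleep 0.5
done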
00:28:07.275 ======================================================== 00:28:07.275 Latency(us) 00:28:07.275 Device Information : IOPS MiB/s Average min max 00:28:07.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003027.85 1000168.34 1011005.93 00:28:07.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1008159.80 1000228.03 1044477.45 00:28:07.275 ======================================================== 00:28:07.275 Total : 256.00 0.12 1005593.83 1000168.34 1044477.45 00:28:07.275 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2861777 00:28:07.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2861777) - No such process 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2861777 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:07.533 rmmod nvme_tcp 00:28:07.533 rmmod nvme_fabrics 00:28:07.533 rmmod nvme_keyring 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2861070 ']' 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2861070 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2861070 ']' 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2861070 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:28:07.533 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2861070 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2861070' 00:28:07.791 killing process with pid 2861070 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2861070 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2861070 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.791 10:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.324 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:10.324 00:28:10.324 real 0m15.764s 00:28:10.324 user 0m26.267s 00:28:10.324 sys 0m5.715s 00:28:10.324 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:10.324 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:10.324 ************************************ 00:28:10.324 END TEST nvmf_delete_subsystem 00:28:10.324 ************************************ 00:28:10.324 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:10.324 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:10.324 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
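nvmftestfini above is the TCP teardown path: unload the initiator-side kernel modules, kill the target process (pid 2861070 here) and strip only the firewall rules the suite added, which works because every such rule carries an SPDK_NVMF comment. A condensed sketch of that cleanup; the "ip netns delete" line is an assumption about what _remove_spdk_ns does, the rest is taken from the trace:

modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics                      # initiator-side kernel modules
kill "$nvmfpid" && wait "$nvmfpid" || true       # stop the nvmf_tgt reactor process

# Keep every iptables rule except the ones tagged with the SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk 2>/dev/null      # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                         # drop the initiator-side test address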
common/autotest_common.sh@1109 -- # xtrace_disable 00:28:10.324 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:10.324 ************************************ 00:28:10.324 START TEST nvmf_host_management 00:28:10.324 ************************************ 00:28:10.324 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:10.324 * Looking for test storage... 00:28:10.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:10.324 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:10.324 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:28:10.324 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:10.324 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:10.324 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:10.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.325 --rc genhtml_branch_coverage=1 00:28:10.325 --rc genhtml_function_coverage=1 00:28:10.325 --rc genhtml_legend=1 00:28:10.325 --rc geninfo_all_blocks=1 00:28:10.325 --rc geninfo_unexecuted_blocks=1 00:28:10.325 00:28:10.325 ' 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:10.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.325 --rc genhtml_branch_coverage=1 00:28:10.325 --rc genhtml_function_coverage=1 00:28:10.325 --rc genhtml_legend=1 00:28:10.325 --rc geninfo_all_blocks=1 00:28:10.325 --rc geninfo_unexecuted_blocks=1 00:28:10.325 00:28:10.325 ' 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:10.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.325 --rc genhtml_branch_coverage=1 00:28:10.325 --rc genhtml_function_coverage=1 00:28:10.325 --rc genhtml_legend=1 00:28:10.325 --rc geninfo_all_blocks=1 00:28:10.325 --rc geninfo_unexecuted_blocks=1 00:28:10.325 00:28:10.325 ' 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:10.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.325 --rc genhtml_branch_coverage=1 00:28:10.325 --rc genhtml_function_coverage=1 00:28:10.325 --rc genhtml_legend=1 
00:28:10.325 --rc geninfo_all_blocks=1 00:28:10.325 --rc geninfo_unexecuted_blocks=1 00:28:10.325 00:28:10.325 ' 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:10.325 10:56:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:10.325 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:10.326 10:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:15.589 10:56:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:15.589 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:15.589 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
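The block above is nvmf/common.sh working out which NICs the phy run can use: it matches supported PCI device IDs (both E810 ports here report 0x159b) and then maps each PCI function to its kernel interface through sysfs. A trimmed-down sketch of that lookup, with the PCI addresses hard-coded to the ones found in this trace:

net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    # Each bound interface shows up as a directory under the device's net/ node, e.g. .../net/cvl_0_0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue           # skip functions with no netdev bound
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep just the interface names
    net_devs+=("${pci_net_devs[@]}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done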
00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:15.589 Found net devices under 0000:86:00.0: cvl_0_0 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:15.589 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:15.590 Found net devices under 0000:86:00.1: cvl_0_1 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:15.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:28:15.590 00:28:15.590 --- 10.0.0.2 ping statistics --- 00:28:15.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.590 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:15.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:28:15.590 00:28:15.590 --- 10.0.0.1 ping statistics --- 00:28:15.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.590 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2865753 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2865753 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2865753 ']' 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
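nvmf_tcp_init above splits the two E810 ports across network namespaces so that the initiator (cvl_0_1, 10.0.0.1) and the target (cvl_0_0, 10.0.0.2) talk over the physical link while living on the same host, and the two pings verify the path in both directions. Collected from the trace, the essential plumbing is:

ip netns add cvl_0_0_ns_spdk                      # namespace that will own the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target-side port into it

ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic in, tagged so teardown can strip the rule again later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                                # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back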
00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:15.590 [2024-11-07 10:56:42.764667] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:15.590 [2024-11-07 10:56:42.765716] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:28:15.590 [2024-11-07 10:56:42.765758] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.590 [2024-11-07 10:56:42.832858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:15.590 [2024-11-07 10:56:42.877534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.590 [2024-11-07 10:56:42.877572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.590 [2024-11-07 10:56:42.877579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.590 [2024-11-07 10:56:42.877585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.590 [2024-11-07 10:56:42.877590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:15.590 [2024-11-07 10:56:42.879153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.590 [2024-11-07 10:56:42.879241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.590 [2024-11-07 10:56:42.879351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.590 [2024-11-07 10:56:42.879351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:15.590 [2024-11-07 10:56:42.946686] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:15.590 [2024-11-07 10:56:42.946835] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:15.590 [2024-11-07 10:56:42.947255] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:15.590 [2024-11-07 10:56:42.947293] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:15.590 [2024-11-07 10:56:42.947445] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
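The target itself is then started inside that namespace with --interrupt-mode and core mask 0x1E (cores 1-4), which is what produces the "Reactor started on core N" and "to intr mode" notices above. Reduced to its essentials, and using a hypothetical $SPDK_DIR in place of the long workspace path:

# Shared-memory id 0, all tracepoint groups, interrupt mode, reactors on cores 1-4
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
# waitforlisten in the trace then blocks until /var/tmp/spdk.sock accepts RPCs,
# before any transport/subsystem configuration is sent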
00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:15.590 10:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:15.590 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.590 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:15.590 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.590 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:15.590 [2024-11-07 10:56:43.008036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:15.591 Malloc0 00:28:15.591 [2024-11-07 10:56:43.080016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2865806 00:28:15.591 10:56:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2865806 /var/tmp/bdevperf.sock 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2865806 ']' 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:15.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:15.591 { 00:28:15.591 "params": { 00:28:15.591 "name": "Nvme$subsystem", 00:28:15.591 "trtype": "$TEST_TRANSPORT", 00:28:15.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:15.591 "adrfam": "ipv4", 00:28:15.591 "trsvcid": "$NVMF_PORT", 00:28:15.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:15.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:15.591 "hdgst": ${hdgst:-false}, 00:28:15.591 "ddgst": ${ddgst:-false} 00:28:15.591 }, 00:28:15.591 "method": "bdev_nvme_attach_controller" 00:28:15.591 } 00:28:15.591 EOF 00:28:15.591 )") 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
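The heredoc above is gen_nvmf_target_json assembling a bdev_nvme_attach_controller entry for subsystem 0; the $NVMF_FIRST_TARGET_IP / $NVMF_PORT placeholders are filled in and the result is handed to bdevperf through /dev/fd/63. The invocation at host_management.sh line 72 therefore amounts to the following sketch ($SPDK_DIR again stands in for the workspace path):

# 64-deep queue, 64 KiB verify workload for 10 s against the bdev described by the generated JSON
"$SPDK_DIR/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

The fully expanded config the shell prints next in the log is exactly this template with Nvme0, 10.0.0.2:4420 and cnode0/host0 substituted in.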
00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:15.591 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:15.591 "params": { 00:28:15.591 "name": "Nvme0", 00:28:15.591 "trtype": "tcp", 00:28:15.591 "traddr": "10.0.0.2", 00:28:15.591 "adrfam": "ipv4", 00:28:15.591 "trsvcid": "4420", 00:28:15.591 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:15.591 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:15.591 "hdgst": false, 00:28:15.591 "ddgst": false 00:28:15.591 }, 00:28:15.591 "method": "bdev_nvme_attach_controller" 00:28:15.591 }' 00:28:15.591 [2024-11-07 10:56:43.174950] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:28:15.591 [2024-11-07 10:56:43.174998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2865806 ] 00:28:15.591 [2024-11-07 10:56:43.237902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.849 [2024-11-07 10:56:43.279825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.108 Running I/O for 10 seconds... 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:28:16.108 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:16.368 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:16.368 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:16.369 [2024-11-07 10:56:43.995776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8ec0 is same with the state(6) to be set 00:28:16.369 [2024-11-07 10:56:43.995820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8ec0 is same with the state(6) to be set 00:28:16.369 [2024-11-07 10:56:43.995828] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8ec0 is same with the state(6) to be set 00:28:16.369 [2024-11-07 10:56:43.995835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8ec0 is same with the state(6) to be set 00:28:16.369 [2024-11-07 10:56:43.995841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dc8ec0 is same with the state(6) to be set 00:28:16.369 10:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.369 10:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:16.369 10:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.369 10:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:16.369 [2024-11-07 10:56:44.004450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.369 [2024-11-07 10:56:44.004484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.369 [2024-11-07 10:56:44.004503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.369 [2024-11-07 10:56:44.004518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.369 [2024-11-07 10:56:44.004533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069510 is same with the state(6) to be set 00:28:16.369 [2024-11-07 10:56:44.004577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004623] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004785] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.369 [2024-11-07 10:56:44.004912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.369 [2024-11-07 10:56:44.004920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.004927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.004937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.004945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.004954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.004961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.004969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.004976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.004984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.004991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.370 [2024-11-07 10:56:44.005506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.370 [2024-11-07 10:56:44.005513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-11-07 10:56:44.005521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-11-07 10:56:44.005528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-11-07 10:56:44.005536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-11-07 10:56:44.005544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-11-07 10:56:44.005552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-11-07 10:56:44.005559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-11-07 10:56:44.005568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-11-07 10:56:44.005575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-11-07 10:56:44.005583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.371 [2024-11-07 10:56:44.005590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.371 [2024-11-07 10:56:44.006557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:16.371 task offset: 98304 on job bdev=Nvme0n1 fails 00:28:16.371 00:28:16.371 Latency(us) 00:28:16.371 [2024-11-07T09:56:44.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.371 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:16.371 Job: Nvme0n1 ended in about 0.41 seconds with error 00:28:16.371 Verification LBA range: start 0x0 length 0x400 00:28:16.371 Nvme0n1 : 0.41 1878.21 117.39 156.52 0.00 30609.91 1681.14 27582.11 00:28:16.371 [2024-11-07T09:56:44.042Z] =================================================================================================================== 00:28:16.371 [2024-11-07T09:56:44.042Z] Total : 1878.21 117.39 156.52 0.00 30609.91 1681.14 27582.11 00:28:16.371 [2024-11-07 10:56:44.008949] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:16.371 [2024-11-07 10:56:44.008971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069510 (9): Bad file descriptor 00:28:16.371 10:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.371 10:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:16.629 [2024-11-07 10:56:44.059827] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
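The trace above is the core of the host_management check: bdevperf is driven against nqn.2016-06.io.spdk:cnode0, the test polls read completions over the /var/tmp/bdevperf.sock RPC socket until at least 100 are seen, and the host NQN is then removed from and re-added to the subsystem, so every queued WRITE completes as ABORTED - SQ DELETION and bdevperf has to reset the controller. A condensed sketch of that flow, assuming the stock scripts/rpc.py client, the target's default RPC socket, and the Nvme0n1 bdev name seen in this run (an illustration of the sequence exercised here, not the verbatim host_management.sh source):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # Poll bdevperf until at least 100 reads have completed on Nvme0n1.
    for i in {10..1}; do
        ops=$($rpc -s $sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        [ "${ops:-0}" -ge 100 ] && break
        sleep 0.25
    done

    # Remove and re-add the host on the target's default RPC socket: in-flight
    # I/O is aborted (SQ DELETION) and bdevperf must reset the controller.
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0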
00:28:17.563 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2865806 00:28:17.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2865806) - No such process 00:28:17.563 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:17.563 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:17.563 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:17.563 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:17.563 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:17.563 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:17.563 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:17.563 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:17.563 { 00:28:17.563 "params": { 00:28:17.563 "name": "Nvme$subsystem", 00:28:17.563 "trtype": "$TEST_TRANSPORT", 00:28:17.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.563 "adrfam": "ipv4", 00:28:17.563 "trsvcid": "$NVMF_PORT", 00:28:17.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.563 "hdgst": ${hdgst:-false}, 00:28:17.563 "ddgst": ${ddgst:-false} 00:28:17.563 }, 00:28:17.563 "method": "bdev_nvme_attach_controller" 00:28:17.563 } 00:28:17.563 EOF 00:28:17.563 )") 00:28:17.563 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:17.563 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:17.564 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:17.564 10:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:17.564 "params": { 00:28:17.564 "name": "Nvme0", 00:28:17.564 "trtype": "tcp", 00:28:17.564 "traddr": "10.0.0.2", 00:28:17.564 "adrfam": "ipv4", 00:28:17.564 "trsvcid": "4420", 00:28:17.564 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:17.564 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:17.564 "hdgst": false, 00:28:17.564 "ddgst": false 00:28:17.564 }, 00:28:17.564 "method": "bdev_nvme_attach_controller" 00:28:17.564 }' 00:28:17.564 [2024-11-07 10:56:45.069036] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:28:17.564 [2024-11-07 10:56:45.069082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2866055 ] 00:28:17.564 [2024-11-07 10:56:45.131612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.564 [2024-11-07 10:56:45.171289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.822 Running I/O for 1 seconds... 00:28:18.756 1920.00 IOPS, 120.00 MiB/s 00:28:18.756 Latency(us) 00:28:18.756 [2024-11-07T09:56:46.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.756 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:18.756 Verification LBA range: start 0x0 length 0x400 00:28:18.756 Nvme0n1 : 1.00 1979.07 123.69 0.00 0.00 31826.31 7351.43 27354.16 00:28:18.756 [2024-11-07T09:56:46.427Z] =================================================================================================================== 00:28:18.756 [2024-11-07T09:56:46.427Z] Total : 1979.07 123.69 0.00 0.00 31826.31 7351.43 27354.16 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:19.015 rmmod nvme_tcp 00:28:19.015 rmmod nvme_fabrics 00:28:19.015 rmmod nvme_keyring 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2865753 ']' 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2865753 00:28:19.015 10:56:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2865753 ']' 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2865753 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2865753 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2865753' 00:28:19.015 killing process with pid 2865753 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2865753 00:28:19.015 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2865753 00:28:19.275 [2024-11-07 10:56:46.765157] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:19.275 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:19.275 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:19.275 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:19.275 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:19.275 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:19.275 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:19.275 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:19.275 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:19.275 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:19.275 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.275 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.275 10:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.214 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:21.496 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:21.496 00:28:21.496 real 0m11.320s 00:28:21.496 user 
0m17.363s 00:28:21.496 sys 0m5.654s 00:28:21.496 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:21.496 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:21.496 ************************************ 00:28:21.496 END TEST nvmf_host_management 00:28:21.496 ************************************ 00:28:21.496 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:21.496 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:21.496 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:21.496 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:21.496 ************************************ 00:28:21.496 START TEST nvmf_lvol 00:28:21.496 ************************************ 00:28:21.496 10:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:21.496 * Looking for test storage... 00:28:21.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:21.496 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:21.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.497 --rc genhtml_branch_coverage=1 00:28:21.497 --rc genhtml_function_coverage=1 00:28:21.497 --rc genhtml_legend=1 00:28:21.497 --rc geninfo_all_blocks=1 00:28:21.497 --rc geninfo_unexecuted_blocks=1 00:28:21.497 00:28:21.497 ' 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:21.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.497 --rc genhtml_branch_coverage=1 00:28:21.497 --rc genhtml_function_coverage=1 00:28:21.497 --rc genhtml_legend=1 00:28:21.497 --rc geninfo_all_blocks=1 00:28:21.497 --rc geninfo_unexecuted_blocks=1 00:28:21.497 00:28:21.497 ' 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:21.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.497 --rc genhtml_branch_coverage=1 00:28:21.497 --rc genhtml_function_coverage=1 00:28:21.497 --rc genhtml_legend=1 00:28:21.497 --rc geninfo_all_blocks=1 00:28:21.497 --rc geninfo_unexecuted_blocks=1 00:28:21.497 00:28:21.497 ' 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:21.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.497 --rc genhtml_branch_coverage=1 00:28:21.497 --rc genhtml_function_coverage=1 
00:28:21.497 --rc genhtml_legend=1 00:28:21.497 --rc geninfo_all_blocks=1 00:28:21.497 --rc geninfo_unexecuted_blocks=1 00:28:21.497 00:28:21.497 ' 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.497 10:56:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:21.497 10:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:28.062 10:56:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:28.062 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:28.062 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:28.062 Found net devices under 0000:86:00.0: cvl_0_0 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.062 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:28.063 Found net devices under 0000:86:00.1: cvl_0_1 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.063 
10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:28.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:28:28.063 00:28:28.063 --- 10.0.0.2 ping statistics --- 00:28:28.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.063 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:28.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:28:28.063 00:28:28.063 --- 10.0.0.1 ping statistics --- 00:28:28.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.063 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2869808 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2869808 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2869808 ']' 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:28.063 10:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:28.063 [2024-11-07 10:56:54.908856] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
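The nvmftestinit trace above reduces to a short sequence of ip/iptables commands: the target-side port (cvl_0_0) is moved into its own namespace, both ends of the link get 10.0.0.x addresses, TCP port 4420 is opened, and the target is launched inside the namespace. A minimal sketch of that setup, using the interface names and core mask from this run (binary path abbreviated):

# put the target port in its own namespace and address both ends of the link
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# start the target inside the namespace; -m 0x7 gives it three cores, with interrupt mode enabled
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &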
00:28:28.063 [2024-11-07 10:56:54.909737] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:28:28.063 [2024-11-07 10:56:54.909769] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.063 [2024-11-07 10:56:54.978411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:28.063 [2024-11-07 10:56:55.021508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.063 [2024-11-07 10:56:55.021547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.063 [2024-11-07 10:56:55.021556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.063 [2024-11-07 10:56:55.021564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.063 [2024-11-07 10:56:55.021569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.063 [2024-11-07 10:56:55.022965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.063 [2024-11-07 10:56:55.022982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.063 [2024-11-07 10:56:55.022988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.063 [2024-11-07 10:56:55.090889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:28.063 [2024-11-07 10:56:55.090978] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:28.063 [2024-11-07 10:56:55.091013] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:28.063 [2024-11-07 10:56:55.091168] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
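With the target up, the nvmf_lvol test body traced below is essentially a straight run of rpc.py calls: a TCP transport, two malloc bdevs combined into a raid0, a lvol store and lvol on top of it, an NVMe-oF subsystem exporting that lvol, then snapshot/resize/clone/inflate while spdk_nvme_perf writes to it. A condensed sketch (rpc.py path abbreviated; the UUIDs returned by each call, which appear verbatim in the trace, are shown as placeholders):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                        # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from the script header
$rpc bdev_malloc_create 64 512
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
$rpc bdev_lvol_create_lvstore raid0 lvs               # prints the lvstore UUID
$rpc bdev_lvol_create -u <lvs-uuid> lvol 20           # LVOL_BDEV_INIT_SIZE=20
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf runs randwrite against the namespace from cores 3-4 (-c 0x18):
$rpc bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
$rpc bdev_lvol_resize <lvol-uuid> 30                  # LVOL_BDEV_FINAL_SIZE=30
$rpc bdev_lvol_clone <snapshot-uuid> MY_CLONE
$rpc bdev_lvol_inflate <clone-uuid>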
00:28:28.063 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:28.063 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:28:28.063 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:28.063 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:28.063 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:28.063 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.063 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:28.063 [2024-11-07 10:56:55.327803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.063 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:28.063 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:28.063 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:28.322 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:28.322 10:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:28.581 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:28.581 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f396f1e4-3545-4a13-82b7-f8ee32e8dbc2 00:28:28.581 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f396f1e4-3545-4a13-82b7-f8ee32e8dbc2 lvol 20 00:28:28.839 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=20cf1a7f-002c-4acd-98fc-b8f62ce06b3f 00:28:28.839 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:29.098 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 20cf1a7f-002c-4acd-98fc-b8f62ce06b3f 00:28:29.098 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:29.356 [2024-11-07 10:56:56.931687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:28:29.356 10:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:29.614 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2870295 00:28:29.614 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:29.614 10:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:30.548 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 20cf1a7f-002c-4acd-98fc-b8f62ce06b3f MY_SNAPSHOT 00:28:30.807 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=61a4b0f4-53be-45ad-aec7-6f51bd0a63cd 00:28:30.807 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 20cf1a7f-002c-4acd-98fc-b8f62ce06b3f 30 00:28:31.065 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 61a4b0f4-53be-45ad-aec7-6f51bd0a63cd MY_CLONE 00:28:31.323 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=40b01177-4310-495d-bd74-daa8c8c8212f 00:28:31.323 10:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 40b01177-4310-495d-bd74-daa8c8c8212f 00:28:31.888 10:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2870295 00:28:39.996 Initializing NVMe Controllers 00:28:39.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:39.996 Controller IO queue size 128, less than required. 00:28:39.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:39.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:39.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:39.996 Initialization complete. Launching workers. 
00:28:39.996 ======================================================== 00:28:39.996 Latency(us) 00:28:39.996 Device Information : IOPS MiB/s Average min max 00:28:39.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12565.60 49.08 10190.73 392.52 62835.86 00:28:39.996 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12457.00 48.66 10280.67 4490.86 66607.92 00:28:39.996 ======================================================== 00:28:39.996 Total : 25022.60 97.74 10235.51 392.52 66607.92 00:28:39.996 00:28:39.996 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:40.254 10:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 20cf1a7f-002c-4acd-98fc-b8f62ce06b3f 00:28:40.512 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f396f1e4-3545-4a13-82b7-f8ee32e8dbc2 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:40.769 rmmod nvme_tcp 00:28:40.769 rmmod nvme_fabrics 00:28:40.769 rmmod nvme_keyring 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2869808 ']' 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2869808 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2869808 ']' 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2869808 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:40.769 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2869808 00:28:40.770 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:40.770 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:40.770 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2869808' 00:28:40.770 killing process with pid 2869808 00:28:40.770 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2869808 00:28:40.770 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2869808 00:28:41.028 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:41.028 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.028 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.028 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:41.028 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:28:41.028 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.028 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.028 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.028 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.028 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.028 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.028 10:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.560 00:28:43.560 real 0m21.672s 00:28:43.560 user 0m55.625s 00:28:43.560 sys 0m9.742s 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:43.560 ************************************ 00:28:43.560 END TEST nvmf_lvol 00:28:43.560 ************************************ 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:43.560 ************************************ 00:28:43.560 START TEST nvmf_lvs_grow 00:28:43.560 
************************************ 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:43.560 * Looking for test storage... 00:28:43.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:43.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.560 --rc genhtml_branch_coverage=1 00:28:43.560 --rc genhtml_function_coverage=1 00:28:43.560 --rc genhtml_legend=1 00:28:43.560 --rc geninfo_all_blocks=1 00:28:43.560 --rc geninfo_unexecuted_blocks=1 00:28:43.560 00:28:43.560 ' 00:28:43.560 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:43.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.560 --rc genhtml_branch_coverage=1 00:28:43.561 --rc genhtml_function_coverage=1 00:28:43.561 --rc genhtml_legend=1 00:28:43.561 --rc geninfo_all_blocks=1 00:28:43.561 --rc geninfo_unexecuted_blocks=1 00:28:43.561 00:28:43.561 ' 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:43.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.561 --rc genhtml_branch_coverage=1 00:28:43.561 --rc genhtml_function_coverage=1 00:28:43.561 --rc genhtml_legend=1 00:28:43.561 --rc geninfo_all_blocks=1 00:28:43.561 --rc geninfo_unexecuted_blocks=1 00:28:43.561 00:28:43.561 ' 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:43.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.561 --rc genhtml_branch_coverage=1 00:28:43.561 --rc genhtml_function_coverage=1 00:28:43.561 --rc genhtml_legend=1 00:28:43.561 --rc geninfo_all_blocks=1 00:28:43.561 --rc geninfo_unexecuted_blocks=1 00:28:43.561 00:28:43.561 ' 00:28:43.561 10:57:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
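The build_nvmf_app_args fragments traced here assemble the target command line the same way the previous test did; pulled together, the array handling amounts to roughly the following sketch (binary path and shm id simplified, the real values come from the workspace and from -i 0 in this job):

NVMF_APP=(./build/bin/nvmf_tgt)                    # actual path comes from the jenkins workspace
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)        # shm id and full tracepoint mask
NVMF_APP+=("${NO_HUGE[@]}")                        # empty in this run, kept for no-huge configurations
NVMF_APP+=(--interrupt-mode)                       # added because this is an interrupt-mode job
# after nvmf_tcp_init the namespace wrapper is prepended:
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
"${NVMF_APP[@]}" -m 0x1 &                          # nvmf_lvs_grow starts its target on a single core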
00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:28:43.561 10:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:48.823 10:57:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.823 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
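gather_supported_nvmf_pci_devs buckets NICs by PCI vendor:device ID exactly as in the previous test; with SPDK_TEST_NVMF_NICS=e810 only the Intel E810 IDs (0x1592, 0x159b) are kept and their net device names are read from sysfs. Outside the harness the same lookup can be approximated with lspci (a sketch, not part of the scripts being traced):

# Intel E810 functions, matching the e810 bucket above (this run found 0000:86:00.0/.1, device 0x159b)
lspci -D -d 8086:1592
lspci -D -d 8086:159b
# the net device behind a function, as read via pci_net_devs in the script
ls /sys/bus/pci/devices/0000:86:00.0/net/          # -> cvl_0_0 in this run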
00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:48.824 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:48.824 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:48.824 Found net devices under 0000:86:00.0: cvl_0_0 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:48.824 Found net devices under 0000:86:00.1: cvl_0_1 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.824 10:57:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:28:48.824 00:28:48.824 --- 10.0.0.2 ping statistics --- 00:28:48.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.824 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:28:48.824 10:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:48.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:28:48.824 00:28:48.824 --- 10.0.0.1 ping statistics --- 00:28:48.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.824 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2875980 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2875980 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2875980 ']' 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:48.824 [2024-11-07 10:57:16.091508] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
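For reference, the network bring-up traced above reduces to the following sketch (interface names, addresses, and the 4420/tcp port are the values reported in this run, not fixed constants): one e810 port is moved into a private network namespace for the target, the peer port stays in the root namespace for the initiator, the listener port is opened, connectivity is verified in both directions, and nvmf_tgt is started inside the namespace in interrupt mode.

    # condensed from the nvmf_tcp_init trace above; names/IPs as reported in this run,
    # binaries relative to the SPDK checkout
    ip netns add cvl_0_0_ns_spdk                           # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move one e810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1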
00:28:48.824 [2024-11-07 10:57:16.092453] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:28:48.824 [2024-11-07 10:57:16.092486] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.824 [2024-11-07 10:57:16.159560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.824 [2024-11-07 10:57:16.201292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.824 [2024-11-07 10:57:16.201326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.824 [2024-11-07 10:57:16.201333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.824 [2024-11-07 10:57:16.201339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.824 [2024-11-07 10:57:16.201344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:48.824 [2024-11-07 10:57:16.201922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.824 [2024-11-07 10:57:16.268456] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:48.824 [2024-11-07 10:57:16.268670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.824 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:49.083 [2024-11-07 10:57:16.502365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:49.083 ************************************ 00:28:49.083 START TEST lvs_grow_clean 00:28:49.083 ************************************ 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:49.083 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:49.342 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:49.342 10:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:49.601 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7339bb94-5a19-48ca-b263-3839ed626d0c 00:28:49.601 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7339bb94-5a19-48ca-b263-3839ed626d0c 00:28:49.601 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:49.601 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:49.601 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:49.601 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7339bb94-5a19-48ca-b263-3839ed626d0c lvol 150 00:28:49.860 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a82b6723-0780-4cc6-97ec-09cddd78c3ea 00:28:49.860 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:49.860 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:50.119 [2024-11-07 10:57:17.598270] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:50.119 [2024-11-07 10:57:17.598339] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:50.119 true 00:28:50.119 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7339bb94-5a19-48ca-b263-3839ed626d0c 00:28:50.119 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:50.377 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:50.377 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:50.377 10:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a82b6723-0780-4cc6-97ec-09cddd78c3ea 00:28:50.635 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:50.894 [2024-11-07 10:57:18.354818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.894 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:51.153 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2876474 00:28:51.153 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:51.153 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2876474 /var/tmp/bdevperf.sock 00:28:51.153 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2876474 ']' 00:28:51.153 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:51.153 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:51.153 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:51.153 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:51.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:51.153 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:51.153 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:51.153 [2024-11-07 10:57:18.630083] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:28:51.153 [2024-11-07 10:57:18.630135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2876474 ] 00:28:51.153 [2024-11-07 10:57:18.693015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.153 [2024-11-07 10:57:18.735083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.412 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:51.412 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:28:51.412 10:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:51.671 Nvme0n1 00:28:51.671 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:51.671 [ 00:28:51.671 { 00:28:51.671 "name": "Nvme0n1", 00:28:51.671 "aliases": [ 00:28:51.671 "a82b6723-0780-4cc6-97ec-09cddd78c3ea" 00:28:51.671 ], 00:28:51.671 "product_name": "NVMe disk", 00:28:51.671 "block_size": 4096, 00:28:51.671 "num_blocks": 38912, 00:28:51.671 "uuid": "a82b6723-0780-4cc6-97ec-09cddd78c3ea", 00:28:51.671 "numa_id": 1, 00:28:51.671 "assigned_rate_limits": { 00:28:51.671 "rw_ios_per_sec": 0, 00:28:51.671 "rw_mbytes_per_sec": 0, 00:28:51.671 "r_mbytes_per_sec": 0, 00:28:51.671 "w_mbytes_per_sec": 0 00:28:51.671 }, 00:28:51.671 "claimed": false, 00:28:51.671 "zoned": false, 00:28:51.671 "supported_io_types": { 00:28:51.671 "read": true, 00:28:51.671 "write": true, 00:28:51.671 "unmap": true, 00:28:51.671 "flush": true, 00:28:51.671 "reset": true, 00:28:51.671 "nvme_admin": true, 00:28:51.671 "nvme_io": true, 00:28:51.671 "nvme_io_md": false, 00:28:51.671 "write_zeroes": true, 00:28:51.671 "zcopy": false, 00:28:51.671 "get_zone_info": false, 00:28:51.671 "zone_management": false, 00:28:51.671 "zone_append": false, 00:28:51.671 "compare": true, 00:28:51.671 "compare_and_write": true, 00:28:51.671 "abort": true, 00:28:51.671 "seek_hole": false, 00:28:51.671 "seek_data": false, 00:28:51.672 "copy": true, 
00:28:51.672 "nvme_iov_md": false 00:28:51.672 }, 00:28:51.672 "memory_domains": [ 00:28:51.672 { 00:28:51.672 "dma_device_id": "system", 00:28:51.672 "dma_device_type": 1 00:28:51.672 } 00:28:51.672 ], 00:28:51.672 "driver_specific": { 00:28:51.672 "nvme": [ 00:28:51.672 { 00:28:51.672 "trid": { 00:28:51.672 "trtype": "TCP", 00:28:51.672 "adrfam": "IPv4", 00:28:51.672 "traddr": "10.0.0.2", 00:28:51.672 "trsvcid": "4420", 00:28:51.672 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:51.672 }, 00:28:51.672 "ctrlr_data": { 00:28:51.672 "cntlid": 1, 00:28:51.672 "vendor_id": "0x8086", 00:28:51.672 "model_number": "SPDK bdev Controller", 00:28:51.672 "serial_number": "SPDK0", 00:28:51.672 "firmware_revision": "25.01", 00:28:51.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.672 "oacs": { 00:28:51.672 "security": 0, 00:28:51.672 "format": 0, 00:28:51.672 "firmware": 0, 00:28:51.672 "ns_manage": 0 00:28:51.672 }, 00:28:51.672 "multi_ctrlr": true, 00:28:51.672 "ana_reporting": false 00:28:51.672 }, 00:28:51.672 "vs": { 00:28:51.672 "nvme_version": "1.3" 00:28:51.672 }, 00:28:51.672 "ns_data": { 00:28:51.672 "id": 1, 00:28:51.672 "can_share": true 00:28:51.672 } 00:28:51.672 } 00:28:51.672 ], 00:28:51.672 "mp_policy": "active_passive" 00:28:51.672 } 00:28:51.672 } 00:28:51.672 ] 00:28:51.931 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2876495 00:28:51.931 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:51.931 10:57:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:51.931 Running I/O for 10 seconds... 
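Before the 10-second run that follows, lvs_grow_clean builds its whole stack over JSON-RPC; the calls traced above condense to roughly the sketch below (rpc.py stands for scripts/rpc.py, paths are relative to the SPDK checkout, and the UUID placeholders stand for the values printed in this run, e.g. lvstore 7339bb94-5a19-48ca-b263-3839ed626d0c). The backing AIO file starts at 200M and the store reports 49 data clusters; the file is then grown to 400M and rescanned before bdevperf drives 4 KiB random writes against the exported lvol over NVMe/TCP.

    # condensed from the lvs_grow_clean trace above; UUIDs as printed in this run
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    truncate -s 200M test/nvmf/target/aio_bdev
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150             # 150 MiB lvol; store shows 49 data clusters
    truncate -s 400M test/nvmf/target/aio_bdev                 # grow only the backing file
    rpc.py bdev_aio_rescan aio_bdev                            # block count 51200 -> 102400
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests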
00:28:52.864 Latency(us) 00:28:52.864 [2024-11-07T09:57:20.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.864 Nvme0n1 : 1.00 21844.00 85.33 0.00 0.00 0.00 0.00 0.00 00:28:52.864 [2024-11-07T09:57:20.535Z] =================================================================================================================== 00:28:52.864 [2024-11-07T09:57:20.535Z] Total : 21844.00 85.33 0.00 0.00 0.00 0.00 0.00 00:28:52.864 00:28:53.799 10:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7339bb94-5a19-48ca-b263-3839ed626d0c 00:28:53.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:53.799 Nvme0n1 : 2.00 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:28:53.799 [2024-11-07T09:57:21.470Z] =================================================================================================================== 00:28:53.799 [2024-11-07T09:57:21.470Z] Total : 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:28:53.799 00:28:54.058 true 00:28:54.058 10:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7339bb94-5a19-48ca-b263-3839ed626d0c 00:28:54.058 10:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:54.317 10:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:54.317 10:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:54.317 10:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2876495 00:28:54.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:54.884 Nvme0n1 : 3.00 22182.67 86.65 0.00 0.00 0.00 0.00 0.00 00:28:54.884 [2024-11-07T09:57:22.555Z] =================================================================================================================== 00:28:54.884 [2024-11-07T09:57:22.555Z] Total : 22182.67 86.65 0.00 0.00 0.00 0.00 0.00 00:28:54.884 00:28:55.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:55.819 Nvme0n1 : 4.00 22256.75 86.94 0.00 0.00 0.00 0.00 0.00 00:28:55.819 [2024-11-07T09:57:23.490Z] =================================================================================================================== 00:28:55.819 [2024-11-07T09:57:23.490Z] Total : 22256.75 86.94 0.00 0.00 0.00 0.00 0.00 00:28:55.819 00:28:57.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:57.195 Nvme0n1 : 5.00 22301.20 87.11 0.00 0.00 0.00 0.00 0.00 00:28:57.195 [2024-11-07T09:57:24.866Z] =================================================================================================================== 00:28:57.195 [2024-11-07T09:57:24.866Z] Total : 22301.20 87.11 0.00 0.00 0.00 0.00 0.00 00:28:57.195 00:28:58.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:58.131 Nvme0n1 : 6.00 22330.83 87.23 0.00 0.00 0.00 0.00 0.00 00:28:58.131 [2024-11-07T09:57:25.802Z] 
=================================================================================================================== 00:28:58.131 [2024-11-07T09:57:25.802Z] Total : 22330.83 87.23 0.00 0.00 0.00 0.00 0.00 00:28:58.131 00:28:59.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:59.066 Nvme0n1 : 7.00 22370.14 87.38 0.00 0.00 0.00 0.00 0.00 00:28:59.066 [2024-11-07T09:57:26.737Z] =================================================================================================================== 00:28:59.066 [2024-11-07T09:57:26.737Z] Total : 22370.14 87.38 0.00 0.00 0.00 0.00 0.00 00:28:59.066 00:29:00.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:00.002 Nvme0n1 : 8.00 22383.75 87.44 0.00 0.00 0.00 0.00 0.00 00:29:00.002 [2024-11-07T09:57:27.673Z] =================================================================================================================== 00:29:00.002 [2024-11-07T09:57:27.673Z] Total : 22383.75 87.44 0.00 0.00 0.00 0.00 0.00 00:29:00.002 00:29:00.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:00.938 Nvme0n1 : 9.00 22408.44 87.53 0.00 0.00 0.00 0.00 0.00 00:29:00.938 [2024-11-07T09:57:28.609Z] =================================================================================================================== 00:29:00.938 [2024-11-07T09:57:28.609Z] Total : 22408.44 87.53 0.00 0.00 0.00 0.00 0.00 00:29:00.938 00:29:01.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:01.873 Nvme0n1 : 10.00 22421.90 87.59 0.00 0.00 0.00 0.00 0.00 00:29:01.873 [2024-11-07T09:57:29.544Z] =================================================================================================================== 00:29:01.873 [2024-11-07T09:57:29.544Z] Total : 22421.90 87.59 0.00 0.00 0.00 0.00 0.00 00:29:01.873 00:29:01.873 00:29:01.873 Latency(us) 00:29:01.873 [2024-11-07T09:57:29.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:01.873 Nvme0n1 : 10.00 22421.50 87.58 0.00 0.00 5705.53 5242.88 14930.81 00:29:01.873 [2024-11-07T09:57:29.544Z] =================================================================================================================== 00:29:01.873 [2024-11-07T09:57:29.544Z] Total : 22421.50 87.58 0.00 0.00 5705.53 5242.88 14930.81 00:29:01.873 { 00:29:01.873 "results": [ 00:29:01.873 { 00:29:01.873 "job": "Nvme0n1", 00:29:01.873 "core_mask": "0x2", 00:29:01.873 "workload": "randwrite", 00:29:01.873 "status": "finished", 00:29:01.873 "queue_depth": 128, 00:29:01.873 "io_size": 4096, 00:29:01.873 "runtime": 10.003034, 00:29:01.873 "iops": 22421.497317713805, 00:29:01.873 "mibps": 87.58397389731955, 00:29:01.873 "io_failed": 0, 00:29:01.873 "io_timeout": 0, 00:29:01.873 "avg_latency_us": 5705.528352010242, 00:29:01.873 "min_latency_us": 5242.88, 00:29:01.873 "max_latency_us": 14930.810434782608 00:29:01.873 } 00:29:01.873 ], 00:29:01.873 "core_count": 1 00:29:01.873 } 00:29:01.873 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2876474 00:29:01.873 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2876474 ']' 00:29:01.873 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2876474 00:29:01.873 
10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:29:01.873 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:01.873 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2876474 00:29:01.873 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:01.873 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:01.873 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2876474' 00:29:01.873 killing process with pid 2876474 00:29:01.874 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2876474 00:29:01.874 Received shutdown signal, test time was about 10.000000 seconds 00:29:01.874 00:29:01.874 Latency(us) 00:29:01.874 [2024-11-07T09:57:29.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.874 [2024-11-07T09:57:29.545Z] =================================================================================================================== 00:29:01.874 [2024-11-07T09:57:29.545Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.874 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2876474 00:29:02.132 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:02.391 10:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:02.649 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:02.649 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7339bb94-5a19-48ca-b263-3839ed626d0c 00:29:02.649 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:02.649 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:02.649 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:02.907 [2024-11-07 10:57:30.474276] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:02.907 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7339bb94-5a19-48ca-b263-3839ed626d0c 00:29:02.907 
10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:29:02.907 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7339bb94-5a19-48ca-b263-3839ed626d0c 00:29:02.907 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:02.907 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.907 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:02.907 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.907 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:02.907 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.907 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:02.907 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:02.907 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7339bb94-5a19-48ca-b263-3839ed626d0c 00:29:03.167 request: 00:29:03.167 { 00:29:03.167 "uuid": "7339bb94-5a19-48ca-b263-3839ed626d0c", 00:29:03.167 "method": "bdev_lvol_get_lvstores", 00:29:03.167 "req_id": 1 00:29:03.167 } 00:29:03.167 Got JSON-RPC error response 00:29:03.167 response: 00:29:03.167 { 00:29:03.167 "code": -19, 00:29:03.167 "message": "No such device" 00:29:03.167 } 00:29:03.167 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:29:03.167 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:03.167 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:03.167 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:03.167 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:03.426 aio_bdev 00:29:03.426 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
a82b6723-0780-4cc6-97ec-09cddd78c3ea 00:29:03.426 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=a82b6723-0780-4cc6-97ec-09cddd78c3ea 00:29:03.426 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:03.426 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:29:03.426 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:03.426 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:03.426 10:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:03.685 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a82b6723-0780-4cc6-97ec-09cddd78c3ea -t 2000 00:29:03.685 [ 00:29:03.685 { 00:29:03.685 "name": "a82b6723-0780-4cc6-97ec-09cddd78c3ea", 00:29:03.685 "aliases": [ 00:29:03.685 "lvs/lvol" 00:29:03.685 ], 00:29:03.685 "product_name": "Logical Volume", 00:29:03.685 "block_size": 4096, 00:29:03.685 "num_blocks": 38912, 00:29:03.685 "uuid": "a82b6723-0780-4cc6-97ec-09cddd78c3ea", 00:29:03.685 "assigned_rate_limits": { 00:29:03.685 "rw_ios_per_sec": 0, 00:29:03.685 "rw_mbytes_per_sec": 0, 00:29:03.685 "r_mbytes_per_sec": 0, 00:29:03.685 "w_mbytes_per_sec": 0 00:29:03.685 }, 00:29:03.685 "claimed": false, 00:29:03.685 "zoned": false, 00:29:03.685 "supported_io_types": { 00:29:03.685 "read": true, 00:29:03.685 "write": true, 00:29:03.685 "unmap": true, 00:29:03.685 "flush": false, 00:29:03.685 "reset": true, 00:29:03.685 "nvme_admin": false, 00:29:03.685 "nvme_io": false, 00:29:03.685 "nvme_io_md": false, 00:29:03.685 "write_zeroes": true, 00:29:03.685 "zcopy": false, 00:29:03.685 "get_zone_info": false, 00:29:03.685 "zone_management": false, 00:29:03.685 "zone_append": false, 00:29:03.685 "compare": false, 00:29:03.685 "compare_and_write": false, 00:29:03.685 "abort": false, 00:29:03.685 "seek_hole": true, 00:29:03.685 "seek_data": true, 00:29:03.685 "copy": false, 00:29:03.685 "nvme_iov_md": false 00:29:03.685 }, 00:29:03.685 "driver_specific": { 00:29:03.685 "lvol": { 00:29:03.685 "lvol_store_uuid": "7339bb94-5a19-48ca-b263-3839ed626d0c", 00:29:03.685 "base_bdev": "aio_bdev", 00:29:03.685 "thin_provision": false, 00:29:03.685 "num_allocated_clusters": 38, 00:29:03.685 "snapshot": false, 00:29:03.685 "clone": false, 00:29:03.685 "esnap_clone": false 00:29:03.685 } 00:29:03.685 } 00:29:03.685 } 00:29:03.685 ] 00:29:03.685 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:29:03.685 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7339bb94-5a19-48ca-b263-3839ed626d0c 00:29:03.685 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:03.943 10:57:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:03.943 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7339bb94-5a19-48ca-b263-3839ed626d0c 00:29:03.943 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:04.202 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:04.202 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a82b6723-0780-4cc6-97ec-09cddd78c3ea 00:29:04.461 10:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7339bb94-5a19-48ca-b263-3839ed626d0c 00:29:04.461 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:04.720 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:04.720 00:29:04.720 real 0m15.785s 00:29:04.720 user 0m15.284s 00:29:04.720 sys 0m1.474s 00:29:04.720 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:04.720 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:04.720 ************************************ 00:29:04.720 END TEST lvs_grow_clean 00:29:04.720 ************************************ 00:29:04.720 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:04.720 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:04.720 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:04.720 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:04.979 ************************************ 00:29:04.979 START TEST lvs_grow_dirty 00:29:04.979 ************************************ 00:29:04.979 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:29:04.979 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:04.979 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:04.979 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:04.979 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:04.979 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:04.979 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:04.979 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:04.979 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:04.979 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:05.239 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:05.239 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:05.239 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:05.239 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:05.239 10:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:05.497 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:05.497 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:05.497 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 lvol 150 00:29:05.827 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=bb68f939-bf19-43ba-861d-69a56f4ba43f 00:29:05.827 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:05.827 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:05.827 [2024-11-07 10:57:33.414307] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:05.827 [2024-11-07 10:57:33.414468] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:05.827 true 00:29:05.827 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:05.827 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:06.130 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:06.130 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:06.388 10:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bb68f939-bf19-43ba-861d-69a56f4ba43f 00:29:06.388 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:06.647 [2024-11-07 10:57:34.218766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.647 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:06.906 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2879059 00:29:06.906 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:06.906 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2879059 /var/tmp/bdevperf.sock 00:29:06.906 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2879059 ']' 00:29:06.906 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:06.906 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:06.906 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:06.906 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:06.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
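lvs_grow_dirty rebuilds the same stack (new lvstore 6bbf12e5-f07e-42ea-a3d4-a9f977715496, lvol bb68f939-bf19-43ba-861d-69a56f4ba43f); the point of this variant is the call that appears further down in the trace, where the lvstore is grown while bdevperf I/O is still in flight and the cluster accounting is checked afterwards. A sketch under those assumptions (rpc.py stands for scripts/rpc.py; the UUID and expected cluster count are the ones printed in this run):

    # grow the store mid-run, then verify the cluster count (sketch; UUID as printed in this run)
    rpc.py bdev_lvol_grow_lvstore -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496
    rpc.py bdev_lvol_get_lvstores -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 \
        | jq -r '.[0].total_data_clusters'                     # expect 99, up from 49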
00:29:06.906 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:06.906 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:06.906 [2024-11-07 10:57:34.480120] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:29:06.906 [2024-11-07 10:57:34.480173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2879059 ] 00:29:06.906 [2024-11-07 10:57:34.543233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.165 [2024-11-07 10:57:34.586376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.165 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:07.165 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:29:07.165 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:07.424 Nvme0n1 00:29:07.424 10:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:07.424 [ 00:29:07.424 { 00:29:07.424 "name": "Nvme0n1", 00:29:07.424 "aliases": [ 00:29:07.424 "bb68f939-bf19-43ba-861d-69a56f4ba43f" 00:29:07.424 ], 00:29:07.424 "product_name": "NVMe disk", 00:29:07.424 "block_size": 4096, 00:29:07.424 "num_blocks": 38912, 00:29:07.424 "uuid": "bb68f939-bf19-43ba-861d-69a56f4ba43f", 00:29:07.424 "numa_id": 1, 00:29:07.424 "assigned_rate_limits": { 00:29:07.424 "rw_ios_per_sec": 0, 00:29:07.424 "rw_mbytes_per_sec": 0, 00:29:07.424 "r_mbytes_per_sec": 0, 00:29:07.424 "w_mbytes_per_sec": 0 00:29:07.424 }, 00:29:07.424 "claimed": false, 00:29:07.424 "zoned": false, 00:29:07.424 "supported_io_types": { 00:29:07.424 "read": true, 00:29:07.424 "write": true, 00:29:07.424 "unmap": true, 00:29:07.424 "flush": true, 00:29:07.424 "reset": true, 00:29:07.424 "nvme_admin": true, 00:29:07.424 "nvme_io": true, 00:29:07.424 "nvme_io_md": false, 00:29:07.424 "write_zeroes": true, 00:29:07.424 "zcopy": false, 00:29:07.424 "get_zone_info": false, 00:29:07.424 "zone_management": false, 00:29:07.424 "zone_append": false, 00:29:07.424 "compare": true, 00:29:07.424 "compare_and_write": true, 00:29:07.424 "abort": true, 00:29:07.424 "seek_hole": false, 00:29:07.424 "seek_data": false, 00:29:07.424 "copy": true, 00:29:07.424 "nvme_iov_md": false 00:29:07.424 }, 00:29:07.424 "memory_domains": [ 00:29:07.424 { 00:29:07.424 "dma_device_id": "system", 00:29:07.424 "dma_device_type": 1 00:29:07.424 } 00:29:07.424 ], 00:29:07.424 "driver_specific": { 00:29:07.424 "nvme": [ 00:29:07.424 { 00:29:07.424 "trid": { 00:29:07.424 "trtype": "TCP", 00:29:07.424 "adrfam": "IPv4", 00:29:07.424 "traddr": "10.0.0.2", 00:29:07.424 "trsvcid": "4420", 00:29:07.424 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:07.424 }, 00:29:07.424 "ctrlr_data": 
{ 00:29:07.424 "cntlid": 1, 00:29:07.424 "vendor_id": "0x8086", 00:29:07.424 "model_number": "SPDK bdev Controller", 00:29:07.424 "serial_number": "SPDK0", 00:29:07.424 "firmware_revision": "25.01", 00:29:07.424 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:07.424 "oacs": { 00:29:07.424 "security": 0, 00:29:07.424 "format": 0, 00:29:07.424 "firmware": 0, 00:29:07.424 "ns_manage": 0 00:29:07.424 }, 00:29:07.424 "multi_ctrlr": true, 00:29:07.424 "ana_reporting": false 00:29:07.424 }, 00:29:07.424 "vs": { 00:29:07.424 "nvme_version": "1.3" 00:29:07.424 }, 00:29:07.424 "ns_data": { 00:29:07.424 "id": 1, 00:29:07.424 "can_share": true 00:29:07.424 } 00:29:07.424 } 00:29:07.424 ], 00:29:07.424 "mp_policy": "active_passive" 00:29:07.424 } 00:29:07.424 } 00:29:07.424 ] 00:29:07.683 10:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2879075 00:29:07.683 10:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:07.683 10:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:07.683 Running I/O for 10 seconds... 00:29:08.619 Latency(us) 00:29:08.619 [2024-11-07T09:57:36.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.619 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:08.619 Nvme0n1 : 1.00 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:08.619 [2024-11-07T09:57:36.290Z] =================================================================================================================== 00:29:08.619 [2024-11-07T09:57:36.290Z] Total : 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:08.619 00:29:09.555 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:09.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:09.555 Nvme0n1 : 2.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:09.555 [2024-11-07T09:57:37.226Z] =================================================================================================================== 00:29:09.555 [2024-11-07T09:57:37.226Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:09.555 00:29:09.814 true 00:29:09.814 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:09.814 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:10.073 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:10.073 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:10.073 10:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2879075 00:29:10.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:10.640 Nvme0n1 : 
3.00 22267.33 86.98 0.00 0.00 0.00 0.00 0.00 00:29:10.640 [2024-11-07T09:57:38.311Z] =================================================================================================================== 00:29:10.640 [2024-11-07T09:57:38.311Z] Total : 22267.33 86.98 0.00 0.00 0.00 0.00 0.00 00:29:10.640 00:29:11.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:11.576 Nvme0n1 : 4.00 22320.25 87.19 0.00 0.00 0.00 0.00 0.00 00:29:11.576 [2024-11-07T09:57:39.247Z] =================================================================================================================== 00:29:11.576 [2024-11-07T09:57:39.247Z] Total : 22320.25 87.19 0.00 0.00 0.00 0.00 0.00 00:29:11.576 00:29:12.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:12.953 Nvme0n1 : 5.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:12.953 [2024-11-07T09:57:40.624Z] =================================================================================================================== 00:29:12.953 [2024-11-07T09:57:40.624Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:12.953 00:29:13.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:13.888 Nvme0n1 : 6.00 22373.17 87.40 0.00 0.00 0.00 0.00 0.00 00:29:13.888 [2024-11-07T09:57:41.559Z] =================================================================================================================== 00:29:13.888 [2024-11-07T09:57:41.559Z] Total : 22373.17 87.40 0.00 0.00 0.00 0.00 0.00 00:29:13.888 00:29:14.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:14.823 Nvme0n1 : 7.00 22406.43 87.53 0.00 0.00 0.00 0.00 0.00 00:29:14.823 [2024-11-07T09:57:42.494Z] =================================================================================================================== 00:29:14.823 [2024-11-07T09:57:42.494Z] Total : 22406.43 87.53 0.00 0.00 0.00 0.00 0.00 00:29:14.823 00:29:15.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:15.760 Nvme0n1 : 8.00 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:15.760 [2024-11-07T09:57:43.431Z] =================================================================================================================== 00:29:15.760 [2024-11-07T09:57:43.431Z] Total : 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:15.760 00:29:16.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:16.696 Nvme0n1 : 9.00 22436.67 87.64 0.00 0.00 0.00 0.00 0.00 00:29:16.696 [2024-11-07T09:57:44.367Z] =================================================================================================================== 00:29:16.696 [2024-11-07T09:57:44.367Z] Total : 22436.67 87.64 0.00 0.00 0.00 0.00 0.00 00:29:16.696 00:29:17.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:17.632 Nvme0n1 : 10.00 22453.60 87.71 0.00 0.00 0.00 0.00 0.00 00:29:17.632 [2024-11-07T09:57:45.303Z] =================================================================================================================== 00:29:17.632 [2024-11-07T09:57:45.303Z] Total : 22453.60 87.71 0.00 0.00 0.00 0.00 0.00 00:29:17.632 00:29:17.632 00:29:17.632 Latency(us) 00:29:17.632 [2024-11-07T09:57:45.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:17.632 Nvme0n1 : 10.01 22455.10 87.72 0.00 0.00 5697.31 5071.92 14930.81 00:29:17.632 
[2024-11-07T09:57:45.303Z] =================================================================================================================== 00:29:17.632 [2024-11-07T09:57:45.303Z] Total : 22455.10 87.72 0.00 0.00 5697.31 5071.92 14930.81 00:29:17.632 { 00:29:17.632 "results": [ 00:29:17.632 { 00:29:17.632 "job": "Nvme0n1", 00:29:17.632 "core_mask": "0x2", 00:29:17.632 "workload": "randwrite", 00:29:17.632 "status": "finished", 00:29:17.632 "queue_depth": 128, 00:29:17.632 "io_size": 4096, 00:29:17.632 "runtime": 10.005034, 00:29:17.632 "iops": 22455.096104620934, 00:29:17.632 "mibps": 87.71521915867552, 00:29:17.632 "io_failed": 0, 00:29:17.632 "io_timeout": 0, 00:29:17.632 "avg_latency_us": 5697.306953146651, 00:29:17.632 "min_latency_us": 5071.91652173913, 00:29:17.632 "max_latency_us": 14930.810434782608 00:29:17.632 } 00:29:17.632 ], 00:29:17.632 "core_count": 1 00:29:17.632 } 00:29:17.632 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2879059 00:29:17.632 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2879059 ']' 00:29:17.632 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2879059 00:29:17.632 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:29:17.632 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:17.632 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2879059 00:29:17.633 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:17.633 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:17.633 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2879059' 00:29:17.633 killing process with pid 2879059 00:29:17.633 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2879059 00:29:17.633 Received shutdown signal, test time was about 10.000000 seconds 00:29:17.633 00:29:17.633 Latency(us) 00:29:17.633 [2024-11-07T09:57:45.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.633 [2024-11-07T09:57:45.304Z] =================================================================================================================== 00:29:17.633 [2024-11-07T09:57:45.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:17.633 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2879059 00:29:17.889 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:18.147 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:29:18.405 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:18.405 10:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:18.405 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:18.405 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:18.405 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2875980 00:29:18.405 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2875980 00:29:18.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2875980 Killed "${NVMF_APP[@]}" "$@" 00:29:18.405 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:18.405 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:18.405 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:18.405 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:18.405 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2880903 00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2880903 00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2880903 ']' 00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
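Note: the dirty-lvstore condition exercised here comes from hard-killing the first nvmf_tgt (pid 2875980 in this run) while its lvstore is still loaded, then starting a fresh target in interrupt mode and waiting for its RPC socket. A minimal sketch of that sequence, assuming the SPDK repo root as working directory and $old_pid as a placeholder; the poll loop is a rough stand-in for the framework's waitforlisten helper, not its exact code:

    # Sketch only -- mirrors the trace above.
    kill -9 "$old_pid"                       # no clean lvstore unload, metadata stays dirty
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    # Poll /var/tmp/spdk.sock until the new target answers RPCs.
    until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done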
00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:18.664 [2024-11-07 10:57:46.114724] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:18.664 [2024-11-07 10:57:46.115660] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:29:18.664 [2024-11-07 10:57:46.115696] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.664 [2024-11-07 10:57:46.181091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.664 [2024-11-07 10:57:46.222503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.664 [2024-11-07 10:57:46.222539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.664 [2024-11-07 10:57:46.222547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.664 [2024-11-07 10:57:46.222553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.664 [2024-11-07 10:57:46.222559] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:18.664 [2024-11-07 10:57:46.223114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.664 [2024-11-07 10:57:46.290393] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:18.664 [2024-11-07 10:57:46.290613] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
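Note: the blobstore recovery notices that follow come from re-attaching the backing AIO file; bdev examine finds the dirty lvstore on it and replays the metadata. A sketch of that recovery path using the same RPCs as the trace, with the UUIDs replaced by placeholders (this run uses lvstore 6bbf12e5-... and lvol bb68f939-...):

    # Sketch: re-create the AIO bdev over the same file, let examine run,
    # then confirm the recovered lvol and lvstore are visible again.
    ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000      # placeholder UUID
    ./scripts/rpc.py bdev_lvol_get_lvstores -u <lvstore-uuid>   # placeholder UUID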
00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:18.664 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:18.922 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.922 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:18.922 [2024-11-07 10:57:46.526198] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:18.922 [2024-11-07 10:57:46.526298] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:18.922 [2024-11-07 10:57:46.526335] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:18.922 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:18.922 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev bb68f939-bf19-43ba-861d-69a56f4ba43f 00:29:18.922 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=bb68f939-bf19-43ba-861d-69a56f4ba43f 00:29:18.922 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:18.922 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:29:18.922 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:18.922 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:18.922 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:19.179 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bb68f939-bf19-43ba-861d-69a56f4ba43f -t 2000 00:29:19.437 [ 00:29:19.437 { 00:29:19.437 "name": "bb68f939-bf19-43ba-861d-69a56f4ba43f", 00:29:19.437 "aliases": [ 00:29:19.437 "lvs/lvol" 00:29:19.437 ], 00:29:19.437 "product_name": "Logical Volume", 00:29:19.437 "block_size": 4096, 00:29:19.437 "num_blocks": 38912, 00:29:19.437 "uuid": "bb68f939-bf19-43ba-861d-69a56f4ba43f", 00:29:19.437 "assigned_rate_limits": { 00:29:19.437 "rw_ios_per_sec": 0, 00:29:19.437 "rw_mbytes_per_sec": 0, 00:29:19.437 
"r_mbytes_per_sec": 0, 00:29:19.437 "w_mbytes_per_sec": 0 00:29:19.437 }, 00:29:19.437 "claimed": false, 00:29:19.437 "zoned": false, 00:29:19.437 "supported_io_types": { 00:29:19.437 "read": true, 00:29:19.437 "write": true, 00:29:19.437 "unmap": true, 00:29:19.437 "flush": false, 00:29:19.437 "reset": true, 00:29:19.437 "nvme_admin": false, 00:29:19.437 "nvme_io": false, 00:29:19.437 "nvme_io_md": false, 00:29:19.437 "write_zeroes": true, 00:29:19.437 "zcopy": false, 00:29:19.437 "get_zone_info": false, 00:29:19.437 "zone_management": false, 00:29:19.437 "zone_append": false, 00:29:19.437 "compare": false, 00:29:19.437 "compare_and_write": false, 00:29:19.437 "abort": false, 00:29:19.437 "seek_hole": true, 00:29:19.437 "seek_data": true, 00:29:19.437 "copy": false, 00:29:19.437 "nvme_iov_md": false 00:29:19.437 }, 00:29:19.437 "driver_specific": { 00:29:19.437 "lvol": { 00:29:19.437 "lvol_store_uuid": "6bbf12e5-f07e-42ea-a3d4-a9f977715496", 00:29:19.437 "base_bdev": "aio_bdev", 00:29:19.437 "thin_provision": false, 00:29:19.437 "num_allocated_clusters": 38, 00:29:19.437 "snapshot": false, 00:29:19.437 "clone": false, 00:29:19.437 "esnap_clone": false 00:29:19.437 } 00:29:19.437 } 00:29:19.437 } 00:29:19.437 ] 00:29:19.437 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:29:19.437 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:19.437 10:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:19.695 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:19.695 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:19.695 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:19.695 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:19.695 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:19.953 [2024-11-07 10:57:47.499493] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:19.953 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:19.953 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:29:19.953 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:19.953 10:57:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:19.953 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:19.953 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:19.953 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:19.953 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:19.953 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:19.953 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:19.953 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:19.953 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:20.211 request: 00:29:20.211 { 00:29:20.211 "uuid": "6bbf12e5-f07e-42ea-a3d4-a9f977715496", 00:29:20.211 "method": "bdev_lvol_get_lvstores", 00:29:20.211 "req_id": 1 00:29:20.211 } 00:29:20.211 Got JSON-RPC error response 00:29:20.211 response: 00:29:20.211 { 00:29:20.211 "code": -19, 00:29:20.211 "message": "No such device" 00:29:20.211 } 00:29:20.211 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:29:20.211 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:20.211 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:20.211 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:20.211 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:20.469 aio_bdev 00:29:20.469 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bb68f939-bf19-43ba-861d-69a56f4ba43f 00:29:20.469 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=bb68f939-bf19-43ba-861d-69a56f4ba43f 00:29:20.469 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:20.469 10:57:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:29:20.469 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:20.469 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:20.469 10:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:20.727 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bb68f939-bf19-43ba-861d-69a56f4ba43f -t 2000 00:29:20.727 [ 00:29:20.727 { 00:29:20.727 "name": "bb68f939-bf19-43ba-861d-69a56f4ba43f", 00:29:20.727 "aliases": [ 00:29:20.727 "lvs/lvol" 00:29:20.727 ], 00:29:20.727 "product_name": "Logical Volume", 00:29:20.727 "block_size": 4096, 00:29:20.727 "num_blocks": 38912, 00:29:20.727 "uuid": "bb68f939-bf19-43ba-861d-69a56f4ba43f", 00:29:20.727 "assigned_rate_limits": { 00:29:20.727 "rw_ios_per_sec": 0, 00:29:20.727 "rw_mbytes_per_sec": 0, 00:29:20.727 "r_mbytes_per_sec": 0, 00:29:20.727 "w_mbytes_per_sec": 0 00:29:20.727 }, 00:29:20.727 "claimed": false, 00:29:20.727 "zoned": false, 00:29:20.727 "supported_io_types": { 00:29:20.727 "read": true, 00:29:20.727 "write": true, 00:29:20.727 "unmap": true, 00:29:20.727 "flush": false, 00:29:20.727 "reset": true, 00:29:20.727 "nvme_admin": false, 00:29:20.727 "nvme_io": false, 00:29:20.727 "nvme_io_md": false, 00:29:20.727 "write_zeroes": true, 00:29:20.727 "zcopy": false, 00:29:20.727 "get_zone_info": false, 00:29:20.727 "zone_management": false, 00:29:20.727 "zone_append": false, 00:29:20.727 "compare": false, 00:29:20.727 "compare_and_write": false, 00:29:20.727 "abort": false, 00:29:20.727 "seek_hole": true, 00:29:20.727 "seek_data": true, 00:29:20.727 "copy": false, 00:29:20.727 "nvme_iov_md": false 00:29:20.727 }, 00:29:20.727 "driver_specific": { 00:29:20.727 "lvol": { 00:29:20.727 "lvol_store_uuid": "6bbf12e5-f07e-42ea-a3d4-a9f977715496", 00:29:20.727 "base_bdev": "aio_bdev", 00:29:20.727 "thin_provision": false, 00:29:20.727 "num_allocated_clusters": 38, 00:29:20.727 "snapshot": false, 00:29:20.727 "clone": false, 00:29:20.727 "esnap_clone": false 00:29:20.727 } 00:29:20.727 } 00:29:20.727 } 00:29:20.727 ] 00:29:20.727 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:29:20.727 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:20.727 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:20.986 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:20.986 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:20.986 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:21.244 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:21.244 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bb68f939-bf19-43ba-861d-69a56f4ba43f 00:29:21.502 10:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6bbf12e5-f07e-42ea-a3d4-a9f977715496 00:29:21.502 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:21.761 00:29:21.761 real 0m16.934s 00:29:21.761 user 0m34.443s 00:29:21.761 sys 0m3.696s 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:21.761 ************************************ 00:29:21.761 END TEST lvs_grow_dirty 00:29:21.761 ************************************ 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:21.761 nvmf_trace.0 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.761 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:22.020 
10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:22.020 rmmod nvme_tcp 00:29:22.020 rmmod nvme_fabrics 00:29:22.020 rmmod nvme_keyring 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2880903 ']' 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2880903 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2880903 ']' 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2880903 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2880903 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2880903' 00:29:22.020 killing process with pid 2880903 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2880903 00:29:22.020 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2880903 00:29:22.278 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:22.278 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:22.278 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:22.278 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:22.278 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:22.278 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:22.278 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:22.278 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:22.278 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:22.278 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.278 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.278 10:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.345 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:24.345 00:29:24.345 real 0m41.114s 00:29:24.345 user 0m51.884s 00:29:24.345 sys 0m9.441s 00:29:24.345 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:24.345 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:24.345 ************************************ 00:29:24.345 END TEST nvmf_lvs_grow 00:29:24.345 ************************************ 00:29:24.345 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:24.345 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:24.345 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:24.345 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:24.345 ************************************ 00:29:24.345 START TEST nvmf_bdev_io_wait 00:29:24.345 ************************************ 00:29:24.345 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:24.345 * Looking for test storage... 
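Note on the nvmf_lvs_grow teardown traced just above, before the bdev_io_wait test begins: process_shm archives the target's trace shared-memory file and nvmftestfini unloads the kernel NVMe-oF modules, kills the target, and strips the test's iptables rules. Roughly, as a sketch rather than the framework's exact functions, with $output_dir and $nvmfpid as placeholders (2880903 in this run):

    # Sketch of the cleanup sequence.
    tar -C /dev/shm -cvzf "$output_dir"/nvmf_trace.0_shm.tar.gz nvmf_trace.0
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics            # nvme_keyring is dropped alongside
    kill "$nvmfpid"                        # the nvmf_tgt started for this test
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove the SPDK_NVMF ACCEPT rules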
00:29:24.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:24.345 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:24.345 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:29:24.345 10:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:24.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.605 --rc genhtml_branch_coverage=1 00:29:24.605 --rc genhtml_function_coverage=1 00:29:24.605 --rc genhtml_legend=1 00:29:24.605 --rc geninfo_all_blocks=1 00:29:24.605 --rc geninfo_unexecuted_blocks=1 00:29:24.605 00:29:24.605 ' 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:24.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.605 --rc genhtml_branch_coverage=1 00:29:24.605 --rc genhtml_function_coverage=1 00:29:24.605 --rc genhtml_legend=1 00:29:24.605 --rc geninfo_all_blocks=1 00:29:24.605 --rc geninfo_unexecuted_blocks=1 00:29:24.605 00:29:24.605 ' 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:24.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.605 --rc genhtml_branch_coverage=1 00:29:24.605 --rc genhtml_function_coverage=1 00:29:24.605 --rc genhtml_legend=1 00:29:24.605 --rc geninfo_all_blocks=1 00:29:24.605 --rc geninfo_unexecuted_blocks=1 00:29:24.605 00:29:24.605 ' 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:24.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.605 --rc genhtml_branch_coverage=1 00:29:24.605 --rc genhtml_function_coverage=1 00:29:24.605 --rc genhtml_legend=1 00:29:24.605 --rc geninfo_all_blocks=1 00:29:24.605 --rc 
geninfo_unexecuted_blocks=1 00:29:24.605 00:29:24.605 ' 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.605 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.606 10:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:29.875 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:29.875 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:29.875 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:29.875 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
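Note: MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set the geometry of the RAM-backed bdev this test exports later (the creation itself is outside this excerpt). For reference, a hedged sketch of creating such a bdev by hand against a running SPDK target; the name Malloc0 is just a conventional choice, not taken from this log:

    # Sketch: a 64 MiB malloc bdev with 512-byte blocks, matching the sizes above.
    # bdev_malloc_create takes total_size (MiB) and block_size; -b names the bdev.
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512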
00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
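Note: the arrays above whitelist NIC PCI IDs per family; since only the E810 entries (0x1592/0x159b) are kept here, the scan that follows looks for those Intel functions and the net interfaces behind them. A rough equivalent of that discovery step, using lspci and sysfs directly rather than the script's cached PCI bus map:

    # Sketch: list E810 (8086:159b) functions and the net interfaces under them.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "Found net devices under $pci: $(ls /sys/bus/pci/devices/"$pci"/net/)"
    done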
00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:29.876 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:29.876 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:29.876 Found net devices under 0000:86:00.0: cvl_0_0 00:29:29.876 
10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:29.876 Found net devices under 0000:86:00.1: cvl_0_1 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:29.876 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:29.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:29.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:29:29.876 00:29:29.877 --- 10.0.0.2 ping statistics --- 00:29:29.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.877 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:29.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:29.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:29:29.877 00:29:29.877 --- 10.0.0.1 ping statistics --- 00:29:29.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.877 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2884941 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2884941 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2884941 ']' 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
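For readability, the namespace plumbing that nvmf_tcp_init traces above amounts to the following, with interface names and addresses exactly as in this run (the nvmf_tgt path is shortened from the Jenkins workspace path in the trace):

# The two E810 ports found above: the target-side port moves into its own netns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # tagged SPDK_NVMF in the real run
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target then starts inside the namespace and idles until RPC-driven init:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc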
00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:29.877 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:29.877 [2024-11-07 10:57:57.478687] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:29.877 [2024-11-07 10:57:57.479671] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:29:29.877 [2024-11-07 10:57:57.479711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.136 [2024-11-07 10:57:57.546633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.136 [2024-11-07 10:57:57.591672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.136 [2024-11-07 10:57:57.591710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.136 [2024-11-07 10:57:57.591717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.136 [2024-11-07 10:57:57.591723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.136 [2024-11-07 10:57:57.591728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.136 [2024-11-07 10:57:57.593248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.136 [2024-11-07 10:57:57.593343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.136 [2024-11-07 10:57:57.593439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.136 [2024-11-07 10:57:57.593439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:30.136 [2024-11-07 10:57:57.593739] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
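The startup notices above also spell out how to inspect the tracepoints enabled by -e 0xFFFF; per those messages, either snapshot the live shared-memory trace or copy it for offline analysis:

# Taken from the notices above (run on the same host while the target is up).
./build/bin/spdk_trace -s nvmf -i 0          # snapshot of events at runtime
cp /dev/shm/nvmf_trace.0 /tmp/               # keep the trace file for offline analysis/debug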
00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:30.136 [2024-11-07 10:57:57.739112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:30.136 [2024-11-07 10:57:57.739223] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:30.136 [2024-11-07 10:57:57.739865] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:30.136 [2024-11-07 10:57:57.740282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
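The two RPCs traced above are what make this test meaningful: because the target was started with --wait-for-rpc, the bdev layer is still configurable, and shrinking the bdev I/O pool to 5 entries with a cache of 1 presumably forces the bdevperf jobs below into the bdev io_wait path. The rpc_cmd calls in the trace are roughly equivalent to:

# Deferred init: tune bdev options, then let the framework finish starting.
rpc=scripts/rpc.py                    # talks to the target's default /var/tmp/spdk.sock
$rpc bdev_set_options -p 5 -c 1       # tiny bdev_io pool (-p) and per-thread cache (-c)
$rpc framework_start_init
# The transport/subsystem RPCs that follow in the trace then build the usual target:
# nvmf_create_transport -t tcp -o -u 8192; bdev_malloc_create 64 512 -b Malloc0;
# nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001;
# nvmf_subsystem_add_ns ... Malloc0; nvmf_subsystem_add_listener ... -t tcp -a 10.0.0.2 -s 4420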
00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:30.136 [2024-11-07 10:57:57.750104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:30.136 Malloc0 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.136 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:30.137 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.137 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:30.137 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.137 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.137 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.137 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:30.396 [2024-11-07 10:57:57.806043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2884969 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2884971 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:30.396 { 00:29:30.396 "params": { 00:29:30.396 "name": "Nvme$subsystem", 00:29:30.396 "trtype": "$TEST_TRANSPORT", 00:29:30.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:30.396 "adrfam": "ipv4", 00:29:30.396 "trsvcid": "$NVMF_PORT", 00:29:30.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:30.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:30.396 "hdgst": ${hdgst:-false}, 00:29:30.396 "ddgst": ${ddgst:-false} 00:29:30.396 }, 00:29:30.396 "method": "bdev_nvme_attach_controller" 00:29:30.396 } 00:29:30.396 EOF 00:29:30.396 )") 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2884973 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:30.396 { 00:29:30.396 "params": { 00:29:30.396 "name": "Nvme$subsystem", 00:29:30.396 "trtype": "$TEST_TRANSPORT", 00:29:30.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:30.396 "adrfam": "ipv4", 00:29:30.396 "trsvcid": "$NVMF_PORT", 00:29:30.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:30.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:30.396 "hdgst": ${hdgst:-false}, 00:29:30.396 "ddgst": ${ddgst:-false} 00:29:30.396 }, 00:29:30.396 "method": "bdev_nvme_attach_controller" 00:29:30.396 } 00:29:30.396 EOF 00:29:30.396 )") 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=2884976 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:30.396 { 00:29:30.396 "params": { 00:29:30.396 "name": "Nvme$subsystem", 00:29:30.396 "trtype": "$TEST_TRANSPORT", 00:29:30.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:30.396 "adrfam": "ipv4", 00:29:30.396 "trsvcid": "$NVMF_PORT", 00:29:30.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:30.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:30.396 "hdgst": ${hdgst:-false}, 00:29:30.396 "ddgst": ${ddgst:-false} 00:29:30.396 }, 00:29:30.396 "method": "bdev_nvme_attach_controller" 00:29:30.396 } 00:29:30.396 EOF 00:29:30.396 )") 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:30.396 { 00:29:30.396 "params": { 00:29:30.396 "name": "Nvme$subsystem", 00:29:30.396 "trtype": "$TEST_TRANSPORT", 00:29:30.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:30.396 "adrfam": "ipv4", 00:29:30.396 "trsvcid": "$NVMF_PORT", 00:29:30.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:30.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:30.396 "hdgst": ${hdgst:-false}, 00:29:30.396 "ddgst": ${ddgst:-false} 00:29:30.396 }, 00:29:30.396 "method": "bdev_nvme_attach_controller" 00:29:30.396 } 00:29:30.396 EOF 00:29:30.396 )") 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2884969 00:29:30.396 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:29:30.397 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:30.397 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
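Condensing the four bdevperf launches traced above: one instance per workload, each on its own core mask and instance id, all pointed at the same subsystem through gen_nvmf_target_json (from nvmf/common.sh), and each PID is waited on in turn:

bp=./build/examples/bdevperf                 # path shortened from the trace
common='-q 128 -o 4096 -t 1 -s 256'
$bp -m 0x10 -i 1 --json <(gen_nvmf_target_json) $common -w write &  WRITE_PID=$!
$bp -m 0x20 -i 2 --json <(gen_nvmf_target_json) $common -w read  &  READ_PID=$!
$bp -m 0x40 -i 3 --json <(gen_nvmf_target_json) $common -w flush &  FLUSH_PID=$!
$bp -m 0x80 -i 4 --json <(gen_nvmf_target_json) $common -w unmap &  UNMAP_PID=$!
wait "$WRITE_PID"; wait "$READ_PID"; wait "$FLUSH_PID"; wait "$UNMAP_PID"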
00:29:30.397 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:30.397 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:30.397 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:30.397 "params": { 00:29:30.397 "name": "Nvme1", 00:29:30.397 "trtype": "tcp", 00:29:30.397 "traddr": "10.0.0.2", 00:29:30.397 "adrfam": "ipv4", 00:29:30.397 "trsvcid": "4420", 00:29:30.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:30.397 "hdgst": false, 00:29:30.397 "ddgst": false 00:29:30.397 }, 00:29:30.397 "method": "bdev_nvme_attach_controller" 00:29:30.397 }' 00:29:30.397 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:29:30.397 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:30.397 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:30.397 "params": { 00:29:30.397 "name": "Nvme1", 00:29:30.397 "trtype": "tcp", 00:29:30.397 "traddr": "10.0.0.2", 00:29:30.397 "adrfam": "ipv4", 00:29:30.397 "trsvcid": "4420", 00:29:30.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:30.397 "hdgst": false, 00:29:30.397 "ddgst": false 00:29:30.397 }, 00:29:30.397 "method": "bdev_nvme_attach_controller" 00:29:30.397 }' 00:29:30.397 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:30.397 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:30.397 "params": { 00:29:30.397 "name": "Nvme1", 00:29:30.397 "trtype": "tcp", 00:29:30.397 "traddr": "10.0.0.2", 00:29:30.397 "adrfam": "ipv4", 00:29:30.397 "trsvcid": "4420", 00:29:30.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:30.397 "hdgst": false, 00:29:30.397 "ddgst": false 00:29:30.397 }, 00:29:30.397 "method": "bdev_nvme_attach_controller" 00:29:30.397 }' 00:29:30.397 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:29:30.397 10:57:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:30.397 "params": { 00:29:30.397 "name": "Nvme1", 00:29:30.397 "trtype": "tcp", 00:29:30.397 "traddr": "10.0.0.2", 00:29:30.397 "adrfam": "ipv4", 00:29:30.397 "trsvcid": "4420", 00:29:30.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:30.397 "hdgst": false, 00:29:30.397 "ddgst": false 00:29:30.397 }, 00:29:30.397 "method": "bdev_nvme_attach_controller" 00:29:30.397 }' 00:29:30.397 [2024-11-07 10:57:57.857248] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:29:30.397 [2024-11-07 10:57:57.857250] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
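The JSON fragments printed above are the per-controller pieces rendered by gen_nvmf_target_json; the full document each bdevperf reads over /dev/fd/63 wraps one such entry in a bdev-subsystem config. The wrapper shape below is an assumption (only the fragment appears in the trace), and the file name is purely illustrative:

cat <<'EOF' > /tmp/nvme1.json        # hypothetical file; the test uses process substitution instead
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# e.g.: ./build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256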
00:29:30.397 [2024-11-07 10:57:57.857298] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-07 10:57:57.857298] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:29:30.397 --proc-type=auto ] 00:29:30.397 [2024-11-07 10:57:57.859380] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:29:30.397 [2024-11-07 10:57:57.859428] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:30.397 [2024-11-07 10:57:57.863821] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:29:30.397 [2024-11-07 10:57:57.863863] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:30.397 [2024-11-07 10:57:58.054718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.656 [2024-11-07 10:57:58.097716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:30.656 [2024-11-07 10:57:58.146733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.656 [2024-11-07 10:57:58.197212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.656 [2024-11-07 10:57:58.198851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:30.656 [2024-11-07 10:57:58.240339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:30.656 [2024-11-07 10:57:58.252513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.656 [2024-11-07 10:57:58.295366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:30.914 Running I/O for 1 seconds... 00:29:30.914 Running I/O for 1 seconds... 00:29:30.914 Running I/O for 1 seconds... 00:29:30.914 Running I/O for 1 seconds... 
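In the per-workload result tables that follow, the MiB/s column is simply IOPS times the 4096-byte I/O size; for example, the write job:

# 11633.05 IOPS * 4096 B = ~45.44 MiB/s, matching the write row below.
echo 'scale=2; 11633.05 * 4096 / (1024 * 1024)' | bc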
00:29:31.851 8285.00 IOPS, 32.36 MiB/s 00:29:31.851 Latency(us) 00:29:31.851 [2024-11-07T09:57:59.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.851 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:31.851 Nvme1n1 : 1.02 8297.35 32.41 0.00 0.00 15311.64 3447.76 22909.11 00:29:31.851 [2024-11-07T09:57:59.522Z] =================================================================================================================== 00:29:31.851 [2024-11-07T09:57:59.522Z] Total : 8297.35 32.41 0.00 0.00 15311.64 3447.76 22909.11 00:29:31.851 11569.00 IOPS, 45.19 MiB/s 00:29:31.851 Latency(us) 00:29:31.851 [2024-11-07T09:57:59.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.851 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:31.851 Nvme1n1 : 1.01 11633.05 45.44 0.00 0.00 10967.76 5014.93 15842.62 00:29:31.851 [2024-11-07T09:57:59.522Z] =================================================================================================================== 00:29:31.851 [2024-11-07T09:57:59.522Z] Total : 11633.05 45.44 0.00 0.00 10967.76 5014.93 15842.62 00:29:31.851 8322.00 IOPS, 32.51 MiB/s 00:29:31.851 Latency(us) 00:29:31.851 [2024-11-07T09:57:59.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.851 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:31.851 Nvme1n1 : 1.00 8418.89 32.89 0.00 0.00 15171.96 3162.82 31001.38 00:29:31.851 [2024-11-07T09:57:59.522Z] =================================================================================================================== 00:29:31.851 [2024-11-07T09:57:59.522Z] Total : 8418.89 32.89 0.00 0.00 15171.96 3162.82 31001.38 00:29:31.851 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2884971 00:29:32.120 246648.00 IOPS, 963.47 MiB/s 00:29:32.120 Latency(us) 00:29:32.120 [2024-11-07T09:57:59.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.120 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:32.120 Nvme1n1 : 1.00 246264.93 961.97 0.00 0.00 517.15 229.73 1538.67 00:29:32.120 [2024-11-07T09:57:59.791Z] =================================================================================================================== 00:29:32.120 [2024-11-07T09:57:59.792Z] Total : 246264.93 961.97 0.00 0.00 517.15 229.73 1538.67 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2884973 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2884976 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:32.121 rmmod nvme_tcp 00:29:32.121 rmmod nvme_fabrics 00:29:32.121 rmmod nvme_keyring 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2884941 ']' 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2884941 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2884941 ']' 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2884941 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:29:32.121 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:32.381 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2884941 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2884941' 00:29:32.382 killing process with pid 2884941 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2884941 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2884941 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.382 10:57:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.914 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:34.915 00:29:34.915 real 0m10.192s 00:29:34.915 user 0m14.947s 00:29:34.915 sys 0m6.010s 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:34.915 ************************************ 00:29:34.915 END TEST nvmf_bdev_io_wait 00:29:34.915 ************************************ 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:34.915 ************************************ 00:29:34.915 START TEST nvmf_queue_depth 00:29:34.915 ************************************ 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:34.915 * Looking for test storage... 
00:29:34.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:34.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.915 --rc genhtml_branch_coverage=1 00:29:34.915 --rc genhtml_function_coverage=1 00:29:34.915 --rc genhtml_legend=1 00:29:34.915 --rc geninfo_all_blocks=1 00:29:34.915 --rc geninfo_unexecuted_blocks=1 00:29:34.915 00:29:34.915 ' 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:34.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.915 --rc genhtml_branch_coverage=1 00:29:34.915 --rc genhtml_function_coverage=1 00:29:34.915 --rc genhtml_legend=1 00:29:34.915 --rc geninfo_all_blocks=1 00:29:34.915 --rc geninfo_unexecuted_blocks=1 00:29:34.915 00:29:34.915 ' 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:34.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.915 --rc genhtml_branch_coverage=1 00:29:34.915 --rc genhtml_function_coverage=1 00:29:34.915 --rc genhtml_legend=1 00:29:34.915 --rc geninfo_all_blocks=1 00:29:34.915 --rc geninfo_unexecuted_blocks=1 00:29:34.915 00:29:34.915 ' 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:34.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.915 --rc genhtml_branch_coverage=1 00:29:34.915 --rc genhtml_function_coverage=1 00:29:34.915 --rc genhtml_legend=1 00:29:34.915 --rc geninfo_all_blocks=1 00:29:34.915 --rc 
geninfo_unexecuted_blocks=1 00:29:34.915 00:29:34.915 ' 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.915 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:34.916 10:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:40.181 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:40.182 10:58:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:40.182 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:40.182 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:29:40.182 Found net devices under 0000:86:00.0: cvl_0_0 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:40.182 Found net devices under 0000:86:00.1: cvl_0_1 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:40.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:29:40.182 00:29:40.182 --- 10.0.0.2 ping statistics --- 00:29:40.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.182 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:40.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:29:40.182 00:29:40.182 --- 10.0.0.1 ping statistics --- 00:29:40.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.182 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2888740 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2888740 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2888740 ']' 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
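At this point waitforlisten blocks until the nvmf_tgt just launched inside cvl_0_0_ns_spdk answers on /var/tmp/spdk.sock. A stand-in for that wait (not the autotest helper itself) can simply poll the socket with the rpc.py client from the SPDK tree; rpc_get_methods is a core RPC that works before any transport or subsystem is configured.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for _ in $(seq 1 100); do                        # roughly 10 s worth of attempts
    if "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break                                    # target is up and serving RPCs
    fi
    sleep 0.1
done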
00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:40.182 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:40.182 [2024-11-07 10:58:07.489877] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:40.183 [2024-11-07 10:58:07.490829] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:29:40.183 [2024-11-07 10:58:07.490865] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.183 [2024-11-07 10:58:07.559026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.183 [2024-11-07 10:58:07.599798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:40.183 [2024-11-07 10:58:07.599836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.183 [2024-11-07 10:58:07.599843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.183 [2024-11-07 10:58:07.599849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.183 [2024-11-07 10:58:07.599854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:40.183 [2024-11-07 10:58:07.600414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.183 [2024-11-07 10:58:07.666007] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:40.183 [2024-11-07 10:58:07.666218] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
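With the target running, the next traced steps provision it over /var/tmp/spdk.sock: a TCP transport (the -t tcp -o -u 8192 options from the trace), a 64 MiB / 512 B Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener on 10.0.0.2:4420. Condensed into direct rpc.py calls (the rpc_cmd wrapper in the trace ends up issuing the same RPCs; the small rpc shell function is defined here just for brevity):

ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }   # local convenience wrapper
rpc nvmf_create_transport -t tcp -o -u 8192                    # same options the trace passes
rpc bdev_malloc_create 64 512 -b Malloc0                       # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420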
00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:40.183 [2024-11-07 10:58:07.732972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:40.183 Malloc0 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:40.183 [2024-11-07 10:58:07.792965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2888759 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2888759 /var/tmp/bdevperf.sock 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2888759 ']' 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:40.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:40.183 10:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:40.183 [2024-11-07 10:58:07.841766] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:29:40.183 [2024-11-07 10:58:07.841812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888759 ] 00:29:40.442 [2024-11-07 10:58:07.904326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.442 [2024-11-07 10:58:07.947763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.442 10:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:40.442 10:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:29:40.442 10:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:40.442 10:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:40.442 10:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:40.442 NVMe0n1 00:29:40.442 10:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:40.442 10:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:40.701 Running I/O for 10 seconds... 00:29:42.571 11264.00 IOPS, 44.00 MiB/s [2024-11-07T09:58:11.620Z] 11777.00 IOPS, 46.00 MiB/s [2024-11-07T09:58:12.557Z] 11925.33 IOPS, 46.58 MiB/s [2024-11-07T09:58:13.493Z] 12016.50 IOPS, 46.94 MiB/s [2024-11-07T09:58:14.429Z] 12069.00 IOPS, 47.14 MiB/s [2024-11-07T09:58:15.366Z] 12093.17 IOPS, 47.24 MiB/s [2024-11-07T09:58:16.303Z] 12129.14 IOPS, 47.38 MiB/s [2024-11-07T09:58:17.240Z] 12152.50 IOPS, 47.47 MiB/s [2024-11-07T09:58:18.617Z] 12169.56 IOPS, 47.54 MiB/s [2024-11-07T09:58:18.617Z] 12170.90 IOPS, 47.54 MiB/s 00:29:50.947 Latency(us) 00:29:50.947 [2024-11-07T09:58:18.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.947 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:50.947 Verification LBA range: start 0x0 length 0x4000 00:29:50.947 NVMe0n1 : 10.07 12190.67 47.62 0.00 0.00 83714.09 19717.79 58811.44 00:29:50.947 [2024-11-07T09:58:18.618Z] =================================================================================================================== 00:29:50.947 [2024-11-07T09:58:18.618Z] Total : 12190.67 47.62 0.00 0.00 83714.09 19717.79 58811.44 00:29:50.947 { 00:29:50.947 "results": [ 00:29:50.947 { 00:29:50.947 "job": "NVMe0n1", 00:29:50.947 "core_mask": "0x1", 00:29:50.947 "workload": "verify", 00:29:50.947 "status": "finished", 00:29:50.947 "verify_range": { 00:29:50.947 "start": 0, 00:29:50.947 "length": 16384 00:29:50.947 }, 00:29:50.947 "queue_depth": 1024, 00:29:50.947 "io_size": 4096, 00:29:50.947 "runtime": 10.065239, 00:29:50.947 "iops": 12190.669292601993, 00:29:50.947 "mibps": 47.619801924226536, 00:29:50.947 "io_failed": 0, 00:29:50.947 "io_timeout": 0, 00:29:50.947 "avg_latency_us": 83714.08824960863, 00:29:50.947 "min_latency_us": 19717.787826086955, 00:29:50.947 "max_latency_us": 58811.43652173913 00:29:50.947 } 
00:29:50.947 ], 00:29:50.947 "core_count": 1 00:29:50.947 } 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2888759 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2888759 ']' 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2888759 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2888759 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2888759' 00:29:50.947 killing process with pid 2888759 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2888759 00:29:50.947 Received shutdown signal, test time was about 10.000000 seconds 00:29:50.947 00:29:50.947 Latency(us) 00:29:50.947 [2024-11-07T09:58:18.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.947 [2024-11-07T09:58:18.618Z] =================================================================================================================== 00:29:50.947 [2024-11-07T09:58:18.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2888759 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.947 rmmod nvme_tcp 00:29:50.947 rmmod nvme_fabrics 00:29:50.947 rmmod nvme_keyring 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
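For reference, the initiator side of the run that just finished is equally compact: bdevperf is started with -z so it waits for RPC configuration, the NVMe-oF controller is attached over its private socket, and perform_tests drives the 10-second, queue-depth-1024 verify workload. Condensed from the commands traced above (same binaries, sockets, and options):

ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$ROOT/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# ...wait for /var/tmp/bdevperf.sock as shown earlier, then attach the remote namespace:
"$ROOT/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests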
00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2888740 ']' 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2888740 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2888740 ']' 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2888740 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:50.947 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2888740 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2888740' 00:29:51.206 killing process with pid 2888740 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2888740 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2888740 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.206 10:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.745 10:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:53.745 00:29:53.745 real 0m18.762s 00:29:53.745 user 0m22.204s 00:29:53.745 sys 0m5.776s 00:29:53.745 10:58:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:53.745 10:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:53.745 ************************************ 00:29:53.745 END TEST nvmf_queue_depth 00:29:53.745 ************************************ 00:29:53.746 10:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:53.746 10:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:53.746 10:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:53.746 10:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:53.746 ************************************ 00:29:53.746 START TEST nvmf_target_multipath 00:29:53.746 ************************************ 00:29:53.746 10:58:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:53.746 * Looking for test storage... 00:29:53.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:53.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.746 --rc genhtml_branch_coverage=1 00:29:53.746 --rc genhtml_function_coverage=1 00:29:53.746 --rc genhtml_legend=1 00:29:53.746 --rc geninfo_all_blocks=1 00:29:53.746 --rc geninfo_unexecuted_blocks=1 00:29:53.746 00:29:53.746 ' 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:53.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.746 --rc genhtml_branch_coverage=1 00:29:53.746 --rc genhtml_function_coverage=1 00:29:53.746 --rc genhtml_legend=1 00:29:53.746 --rc geninfo_all_blocks=1 00:29:53.746 --rc geninfo_unexecuted_blocks=1 00:29:53.746 00:29:53.746 ' 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:53.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.746 --rc genhtml_branch_coverage=1 00:29:53.746 --rc genhtml_function_coverage=1 00:29:53.746 --rc genhtml_legend=1 
00:29:53.746 --rc geninfo_all_blocks=1 00:29:53.746 --rc geninfo_unexecuted_blocks=1 00:29:53.746 00:29:53.746 ' 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:53.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:53.746 --rc genhtml_branch_coverage=1 00:29:53.746 --rc genhtml_function_coverage=1 00:29:53.746 --rc genhtml_legend=1 00:29:53.746 --rc geninfo_all_blocks=1 00:29:53.746 --rc geninfo_unexecuted_blocks=1 00:29:53.746 00:29:53.746 ' 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.746 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:29:53.747 10:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.020 10:58:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:59.020 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.020 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:59.021 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:59.021 10:58:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:59.021 Found net devices under 0000:86:00.0: cvl_0_0 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:59.021 Found net devices under 0000:86:00.1: cvl_0_1 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:59.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:59.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:29:59.021 00:29:59.021 --- 10.0.0.2 ping statistics --- 00:29:59.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.021 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:29:59.021 00:29:59.021 --- 10.0.0.1 ping statistics --- 00:29:59.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.021 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:59.021 only one NIC for nvmf test 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:59.021 rmmod nvme_tcp 00:29:59.021 rmmod nvme_fabrics 00:29:59.021 rmmod nvme_keyring 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:59.021 10:58:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:59.021 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:29:59.022 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.022 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.022 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.022 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.022 10:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:00.926 10:58:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.926 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:01.185 00:30:01.185 real 0m7.628s 00:30:01.185 user 0m1.584s 00:30:01.185 sys 0m4.037s 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:01.185 ************************************ 00:30:01.185 END TEST nvmf_target_multipath 00:30:01.185 ************************************ 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:01.185 ************************************ 00:30:01.185 START TEST nvmf_zcopy 00:30:01.185 ************************************ 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:01.185 * Looking for test storage... 
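Before the zcopy run below repeats it, the nvmf_tcp_init sequence that both tests drive is worth collapsing: one E810 port is moved into a private network namespace as the target side, the other stays in the default namespace as the initiator, both get a 10.0.0.0/24 address, and a tagged iptables rule opens TCP port 4420. The commands below are the ones traced above with this job's interface names; only the comments and the teardown summary afterwards are editorial, not quotes of nvmf/common.sh.

    ip -4 addr flush cvl_0_0                      # start from clean interfaces
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # ipts tags the rule so teardown can strip exactly what the test added:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                            # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The nvmftestfini/iptr teardown traced just above undoes this by replaying iptables-save | grep -v SPDK_NVMF | iptables-restore and flushing the interface addresses; deleting the namespace itself happens inside remove_spdk_ns, whose body is not shown in this trace.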
00:30:01.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:01.185 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:01.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.186 --rc genhtml_branch_coverage=1 00:30:01.186 --rc genhtml_function_coverage=1 00:30:01.186 --rc genhtml_legend=1 00:30:01.186 --rc geninfo_all_blocks=1 00:30:01.186 --rc geninfo_unexecuted_blocks=1 00:30:01.186 00:30:01.186 ' 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:01.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.186 --rc genhtml_branch_coverage=1 00:30:01.186 --rc genhtml_function_coverage=1 00:30:01.186 --rc genhtml_legend=1 00:30:01.186 --rc geninfo_all_blocks=1 00:30:01.186 --rc geninfo_unexecuted_blocks=1 00:30:01.186 00:30:01.186 ' 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:01.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.186 --rc genhtml_branch_coverage=1 00:30:01.186 --rc genhtml_function_coverage=1 00:30:01.186 --rc genhtml_legend=1 00:30:01.186 --rc geninfo_all_blocks=1 00:30:01.186 --rc geninfo_unexecuted_blocks=1 00:30:01.186 00:30:01.186 ' 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:01.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.186 --rc genhtml_branch_coverage=1 00:30:01.186 --rc genhtml_function_coverage=1 00:30:01.186 --rc genhtml_legend=1 00:30:01.186 --rc geninfo_all_blocks=1 00:30:01.186 --rc geninfo_unexecuted_blocks=1 00:30:01.186 00:30:01.186 ' 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.186 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.445 10:58:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:01.445 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:01.446 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:01.446 10:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:06.718 10:58:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.718 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:06.719 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:06.719 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:06.719 Found net devices under 0000:86:00.0: cvl_0_0 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:06.719 Found net devices under 0000:86:00.1: cvl_0_1 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:06.719 10:58:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.719 10:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:06.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:30:06.719 00:30:06.719 --- 10.0.0.2 ping statistics --- 00:30:06.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.719 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:06.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:30:06.719 00:30:06.719 --- 10.0.0.1 ping statistics --- 00:30:06.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.719 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2897185 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2897185 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2897185 ']' 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.719 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:06.720 [2024-11-07 10:58:34.085634] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:06.720 [2024-11-07 10:58:34.086564] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:30:06.720 [2024-11-07 10:58:34.086598] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.720 [2024-11-07 10:58:34.153143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.720 [2024-11-07 10:58:34.194140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.720 [2024-11-07 10:58:34.194178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.720 [2024-11-07 10:58:34.194185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.720 [2024-11-07 10:58:34.194192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.720 [2024-11-07 10:58:34.194197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:06.720 [2024-11-07 10:58:34.194747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.720 [2024-11-07 10:58:34.260939] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:06.720 [2024-11-07 10:58:34.261163] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
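The nvmfappstart call traced above starts the target inside that namespace and waits for its RPC socket. Stripped of the framework plumbing it amounts to the sketch below; the polling probe paraphrases waitforlisten (using rpc_get_methods against the default /var/tmp/spdk.sock is an assumption about that helper, not a quote of it), while the core mask 0x2 plus --interrupt-mode is what produces the "interrupt mode" and "Reactor started on core 1" notices above. Paths are shortened relative to the trace.

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!                                    # 2897185 in this run
    # Wait until the app answers on its RPC socket, bailing out if it dies first.
    until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.5
    done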
00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:06.720 [2024-11-07 10:58:34.331189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:06.720 [2024-11-07 10:58:34.355391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:06.720 10:58:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.720 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:06.979 malloc0 00:30:06.979 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.979 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:06.979 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.979 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:06.980 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.980 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:06.980 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:06.980 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:06.980 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:06.980 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:06.980 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:06.980 { 00:30:06.980 "params": { 00:30:06.980 "name": "Nvme$subsystem", 00:30:06.980 "trtype": "$TEST_TRANSPORT", 00:30:06.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.980 "adrfam": "ipv4", 00:30:06.980 "trsvcid": "$NVMF_PORT", 00:30:06.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.980 "hdgst": ${hdgst:-false}, 00:30:06.980 "ddgst": ${ddgst:-false} 00:30:06.980 }, 00:30:06.980 "method": "bdev_nvme_attach_controller" 00:30:06.980 } 00:30:06.980 EOF 00:30:06.980 )") 00:30:06.980 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:06.980 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:06.980 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:06.980 10:58:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:06.980 "params": { 00:30:06.980 "name": "Nvme1", 00:30:06.980 "trtype": "tcp", 00:30:06.980 "traddr": "10.0.0.2", 00:30:06.980 "adrfam": "ipv4", 00:30:06.980 "trsvcid": "4420", 00:30:06.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.980 "hdgst": false, 00:30:06.980 "ddgst": false 00:30:06.980 }, 00:30:06.980 "method": "bdev_nvme_attach_controller" 00:30:06.980 }' 00:30:06.980 [2024-11-07 10:58:34.447576] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
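With the target listening, zcopy.sh provisions it through rpc_cmd exactly as traced above. Written out as direct scripts/rpc.py calls, treating rpc_cmd as a thin wrapper over rpc.py and the default RPC socket is an assumption about the test harness; the flags themselves are copied verbatim from the trace.

    rpc=./scripts/rpc.py                          # assumed plumbing behind rpc_cmd
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport with zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0                   # 32 MB malloc bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The ten-second verify pass that follows feeds the generated target JSON to bdevperf over an anonymous descriptor (--json /dev/fd/62 -t 10 -q 128 -w verify -o 8192), which is what produces the per-second IOPS ticks and the latency summary further down.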
00:30:06.980 [2024-11-07 10:58:34.447622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897330 ] 00:30:06.980 [2024-11-07 10:58:34.509842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.980 [2024-11-07 10:58:34.550888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.239 Running I/O for 10 seconds... 00:30:09.554 8252.00 IOPS, 64.47 MiB/s [2024-11-07T09:58:38.162Z] 8328.00 IOPS, 65.06 MiB/s [2024-11-07T09:58:39.100Z] 8342.00 IOPS, 65.17 MiB/s [2024-11-07T09:58:40.034Z] 8360.00 IOPS, 65.31 MiB/s [2024-11-07T09:58:40.971Z] 8368.80 IOPS, 65.38 MiB/s [2024-11-07T09:58:41.908Z] 8356.33 IOPS, 65.28 MiB/s [2024-11-07T09:58:43.284Z] 8364.57 IOPS, 65.35 MiB/s [2024-11-07T09:58:43.852Z] 8371.50 IOPS, 65.40 MiB/s [2024-11-07T09:58:45.229Z] 8377.89 IOPS, 65.45 MiB/s [2024-11-07T09:58:45.229Z] 8382.00 IOPS, 65.48 MiB/s 00:30:17.558 Latency(us) 00:30:17.558 [2024-11-07T09:58:45.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.558 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:17.558 Verification LBA range: start 0x0 length 0x1000 00:30:17.558 Nvme1n1 : 10.01 8382.52 65.49 0.00 0.00 15225.91 1638.40 21313.45 00:30:17.558 [2024-11-07T09:58:45.229Z] =================================================================================================================== 00:30:17.558 [2024-11-07T09:58:45.229Z] Total : 8382.52 65.49 0.00 0.00 15225.91 1638.40 21313.45 00:30:17.558 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2899024 00:30:17.558 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:17.558 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:17.558 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:17.558 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:17.558 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:17.558 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:17.558 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.558 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.558 { 00:30:17.558 "params": { 00:30:17.558 "name": "Nvme$subsystem", 00:30:17.558 "trtype": "$TEST_TRANSPORT", 00:30:17.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.558 "adrfam": "ipv4", 00:30:17.558 "trsvcid": "$NVMF_PORT", 00:30:17.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.558 "hdgst": ${hdgst:-false}, 00:30:17.558 "ddgst": ${ddgst:-false} 00:30:17.558 }, 00:30:17.558 "method": "bdev_nvme_attach_controller" 00:30:17.558 } 00:30:17.558 EOF 00:30:17.558 )") 00:30:17.558 [2024-11-07 10:58:45.023092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:30:17.558 [2024-11-07 10:58:45.023124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.558 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:17.558 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:17.558 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:17.558 10:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:17.558 "params": { 00:30:17.558 "name": "Nvme1", 00:30:17.558 "trtype": "tcp", 00:30:17.558 "traddr": "10.0.0.2", 00:30:17.558 "adrfam": "ipv4", 00:30:17.558 "trsvcid": "4420", 00:30:17.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:17.558 "hdgst": false, 00:30:17.558 "ddgst": false 00:30:17.558 }, 00:30:17.558 "method": "bdev_nvme_attach_controller" 00:30:17.558 }' 00:30:17.558 [2024-11-07 10:58:45.035060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.558 [2024-11-07 10:58:45.035073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.047054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.047066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.059056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.059066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.065813] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
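The "Requested NSID 1 already in use" / "Unable to add namespace" pairs that fill the remainder of this run evidently come from the zcopy test re-issuing nvmf_subsystem_add_ns against nqn.2016-06.io.spdk:cnode1 while the second bdevperf instance (perfpid 2899024, randrw 50/50, 5 s, 8 KiB I/O) keeps I/O in flight; malloc0 was already attached as NSID 1 during setup (zcopy.sh@30), so every retry is rejected in subsystem.c and reported by nvmf_rpc.c. A rough sketch of that retry pattern, assuming scripts/rpc.py from the SPDK tree and not the literal contents of zcopy.sh:

    # Hypothetical illustration of the repeated RPC seen above; each call is expected to fail
    # with "Requested NSID 1 already in use" for as long as bdevperf ($perfpid) is running.
    while kill -0 "$perfpid" 2>/dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done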
00:30:17.559 [2024-11-07 10:58:45.065854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899024 ] 00:30:17.559 [2024-11-07 10:58:45.071057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.071067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.083053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.083063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.095055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.095065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.107054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.107063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.119054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.119063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.128414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.559 [2024-11-07 10:58:45.131052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.131061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.143055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.143076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.155076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.155088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.167055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.167066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.170234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.559 [2024-11-07 10:58:45.179058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.179071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.191062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.191082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.203058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.559 [2024-11-07 10:58:45.203072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.559 [2024-11-07 10:58:45.215054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:30:17.559 [2024-11-07 10:58:45.215066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.818 [2024-11-07 10:58:45.227058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.818 [2024-11-07 10:58:45.227069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.818 [2024-11-07 10:58:45.239063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.818 [2024-11-07 10:58:45.239081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.818 [2024-11-07 10:58:45.251062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.818 [2024-11-07 10:58:45.251078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.818 [2024-11-07 10:58:45.263062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.818 [2024-11-07 10:58:45.263080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.818 [2024-11-07 10:58:45.275062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.818 [2024-11-07 10:58:45.275075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.818 [2024-11-07 10:58:45.287063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.818 [2024-11-07 10:58:45.287077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.818 [2024-11-07 10:58:45.299061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.818 [2024-11-07 10:58:45.299074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.818 [2024-11-07 10:58:45.311052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.818 [2024-11-07 10:58:45.311062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.818 [2024-11-07 10:58:45.323051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.818 [2024-11-07 10:58:45.323060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.818 [2024-11-07 10:58:45.335056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.818 [2024-11-07 10:58:45.335070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.818 [2024-11-07 10:58:45.347053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.818 [2024-11-07 10:58:45.347062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.818 [2024-11-07 10:58:45.359052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.819 [2024-11-07 10:58:45.359061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.819 [2024-11-07 10:58:45.371054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.819 [2024-11-07 10:58:45.371063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.819 [2024-11-07 10:58:45.383054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.819 [2024-11-07 10:58:45.383067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.819 [2024-11-07 
10:58:45.395056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.819 [2024-11-07 10:58:45.395065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.819 [2024-11-07 10:58:45.407054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.819 [2024-11-07 10:58:45.407062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.819 [2024-11-07 10:58:45.419054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.819 [2024-11-07 10:58:45.419065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.819 [2024-11-07 10:58:45.431064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.819 [2024-11-07 10:58:45.431081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.819 Running I/O for 5 seconds... 00:30:17.819 [2024-11-07 10:58:45.443095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.819 [2024-11-07 10:58:45.443111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.819 [2024-11-07 10:58:45.459335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.819 [2024-11-07 10:58:45.459356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.819 [2024-11-07 10:58:45.471438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.819 [2024-11-07 10:58:45.471456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:17.819 [2024-11-07 10:58:45.485247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:17.819 [2024-11-07 10:58:45.485266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.500803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.500822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.515709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.515727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.527181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.527199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.540876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.540895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.556112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.556131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.571494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.571512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.586940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:18.078 [2024-11-07 10:58:45.586958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.600831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.600849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.615980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.616003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.631409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.631427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.642799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.642817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.657406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.657424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.672057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.672075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.686714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.686733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.700660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.700680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.715699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.715719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.730810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.730830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.078 [2024-11-07 10:58:45.742517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.078 [2024-11-07 10:58:45.742537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.757217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.757237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.772566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.772584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.787379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.787397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.803604] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.803623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.819412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.819430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.832194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.832214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.843034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.843053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.857577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.857596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.873105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.873123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.888184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.888211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.903329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.903348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.914112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.914132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.929388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.929408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.944511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.944532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.959731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.959750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.975008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.975028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.337 [2024-11-07 10:58:45.989201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.337 [2024-11-07 10:58:45.989221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.004641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.004661] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.019889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.019908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.035113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.035132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.046171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.046191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.061378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.061397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.076683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.076702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.092247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.092266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.107544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.107563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.118548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.118567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.133322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.133341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.148669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.148687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.164126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.164148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.179397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.179415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.191889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.191908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.207423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.207447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.220108] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.220126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.234983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.235003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.596 [2024-11-07 10:58:46.248637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.596 [2024-11-07 10:58:46.248655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.263939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.263957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.274864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.274882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.289302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.289320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.304493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.304512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.319464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.319483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.331847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.331866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.347358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.347376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.359647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.359665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.375586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.375605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.387667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.387685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.403373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.403398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.415569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.415586] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.429067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.429091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.444893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.444912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 16172.00 IOPS, 126.34 MiB/s [2024-11-07T09:58:46.526Z] [2024-11-07 10:58:46.460590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.460608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.475485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.475503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.487599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.487617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.500761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.500779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:18.855 [2024-11-07 10:58:46.516101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:18.855 [2024-11-07 10:58:46.516120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.114 [2024-11-07 10:58:46.531627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.114 [2024-11-07 10:58:46.531645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.114 [2024-11-07 10:58:46.543507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.114 [2024-11-07 10:58:46.543524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.114 [2024-11-07 10:58:46.557075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.114 [2024-11-07 10:58:46.557093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.114 [2024-11-07 10:58:46.572667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.114 [2024-11-07 10:58:46.572686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.114 [2024-11-07 10:58:46.587863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.114 [2024-11-07 10:58:46.587880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.114 [2024-11-07 10:58:46.603570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.114 [2024-11-07 10:58:46.603587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.114 [2024-11-07 10:58:46.615223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.114 [2024-11-07 10:58:46.615240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.114 [2024-11-07 
10:58:46.629335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.114 [2024-11-07 10:58:46.629353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.114 [2024-11-07 10:58:46.644794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.114 [2024-11-07 10:58:46.644812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.114 [2024-11-07 10:58:46.659998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.114 [2024-11-07 10:58:46.660015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.115 [2024-11-07 10:58:46.672586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.115 [2024-11-07 10:58:46.672604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.115 [2024-11-07 10:58:46.687754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.115 [2024-11-07 10:58:46.687772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.115 [2024-11-07 10:58:46.703735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.115 [2024-11-07 10:58:46.703753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.115 [2024-11-07 10:58:46.719641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.115 [2024-11-07 10:58:46.719659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.115 [2024-11-07 10:58:46.735268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.115 [2024-11-07 10:58:46.735287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.115 [2024-11-07 10:58:46.746337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.115 [2024-11-07 10:58:46.746356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.115 [2024-11-07 10:58:46.761140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.115 [2024-11-07 10:58:46.761158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.115 [2024-11-07 10:58:46.776644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.115 [2024-11-07 10:58:46.776662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.791780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.791798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.807121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.807139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.820781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.820800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.835757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.835775] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.846768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.846786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.861663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.861682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.876669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.876687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.891942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.891960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.907319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.907338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.918975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.918995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.933680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.933704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.948503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.948522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.963979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.963997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.978923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.978943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:46.989726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:46.989745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:47.004882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:47.004901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:47.019422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:47.019445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.374 [2024-11-07 10:58:47.031875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.374 [2024-11-07 10:58:47.031894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.044945] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.044964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.060487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.060505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.075536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.075554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.091416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.091440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.103886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.103904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.115450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.115467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.128519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.128537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.139591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.139609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.154796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.154816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.168916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.168935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.184268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.184287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.199312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.199331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.211822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.211841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.227667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.227686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.243532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.243551] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.255872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.255891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.268693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.268711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.284160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.284178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.633 [2024-11-07 10:58:47.299586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.633 [2024-11-07 10:58:47.299604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.892 [2024-11-07 10:58:47.310624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.892 [2024-11-07 10:58:47.310642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.892 [2024-11-07 10:58:47.324979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.892 [2024-11-07 10:58:47.324998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.892 [2024-11-07 10:58:47.340272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.340291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 [2024-11-07 10:58:47.355518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.355537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 [2024-11-07 10:58:47.371654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.371675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 [2024-11-07 10:58:47.386991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.387012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 [2024-11-07 10:58:47.400852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.400871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 [2024-11-07 10:58:47.415999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.416018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 [2024-11-07 10:58:47.430871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.430890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 [2024-11-07 10:58:47.445520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.445541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 16197.00 IOPS, 126.54 MiB/s [2024-11-07T09:58:47.564Z] [2024-11-07 
10:58:47.460378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.460397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 [2024-11-07 10:58:47.475523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.475543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 [2024-11-07 10:58:47.491857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.491876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 [2024-11-07 10:58:47.507095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.507118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 [2024-11-07 10:58:47.520700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.520719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 [2024-11-07 10:58:47.536141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.536159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:19.893 [2024-11-07 10:58:47.551104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:19.893 [2024-11-07 10:58:47.551123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.151 [2024-11-07 10:58:47.562641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.562660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.576966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.576986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.591658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.591677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.607366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.607385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.619405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.619422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.632679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.632698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.643213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.643230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.656959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.656978] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.672636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.672655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.686903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.686922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.698262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.698280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.712626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.712644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.727782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.727801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.743251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.743269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.754675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.754703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.769006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.769028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.784006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.784024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.799035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.799053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.152 [2024-11-07 10:58:47.811817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.152 [2024-11-07 10:58:47.811834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:47.824708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:47.824726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:47.840006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:47.840024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:47.850680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:47.850709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:47.864830] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:47.864848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:47.879914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:47.879931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:47.895262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:47.895280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:47.906324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:47.906342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:47.920583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:47.920603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:47.936250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:47.936269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:47.951464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:47.951484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:47.964537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:47.964558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:47.980252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:47.980271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:47.995269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:47.995288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:48.007899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:48.007918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:48.023008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:48.023027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:48.034653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:48.034675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:48.049213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:48.049232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.411 [2024-11-07 10:58:48.064028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.411 [2024-11-07 10:58:48.064047] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.079311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.079330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.091216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.091234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.104824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.104843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.119938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.119956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.134934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.134952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.148125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.148145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.162914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.162933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.174393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.174412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.189045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.189064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.204220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.204238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.219404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.219422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.232107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.232124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.243808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.243825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.259084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.259101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.271971] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.271990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.287210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.287228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.298040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.298059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.312880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.312898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.670 [2024-11-07 10:58:48.327538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.670 [2024-11-07 10:58:48.327555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.343155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.343174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.356521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.356539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.371961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.371978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.387424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.387448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.402715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.402733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.416664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.416682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.431764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.431782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.447876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.447895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 16244.67 IOPS, 126.91 MiB/s [2024-11-07T09:58:48.600Z] [2024-11-07 10:58:48.463412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.463438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.474325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:20.929 [2024-11-07 10:58:48.474344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.489552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.489570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.504213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.504231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.519418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.519441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.535168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.535187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.549009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.549027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.564747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.929 [2024-11-07 10:58:48.564766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.929 [2024-11-07 10:58:48.579851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.930 [2024-11-07 10:58:48.579869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:20.930 [2024-11-07 10:58:48.595360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:20.930 [2024-11-07 10:58:48.595380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.188 [2024-11-07 10:58:48.607167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.188 [2024-11-07 10:58:48.607186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.188 [2024-11-07 10:58:48.621574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.188 [2024-11-07 10:58:48.621594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.188 [2024-11-07 10:58:48.636892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.188 [2024-11-07 10:58:48.636912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.188 [2024-11-07 10:58:48.652081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.188 [2024-11-07 10:58:48.652101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.188 [2024-11-07 10:58:48.667146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.188 [2024-11-07 10:58:48.667166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.188 [2024-11-07 10:58:48.680471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.188 [2024-11-07 10:58:48.680489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.188 [2024-11-07 10:58:48.691073] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.188 [2024-11-07 10:58:48.691093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.188 [2024-11-07 10:58:48.705614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.188 [2024-11-07 10:58:48.705633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.188 [2024-11-07 10:58:48.721285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.189 [2024-11-07 10:58:48.721305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.189 [2024-11-07 10:58:48.736340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.189 [2024-11-07 10:58:48.736359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.189 [2024-11-07 10:58:48.747723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.189 [2024-11-07 10:58:48.747740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.189 [2024-11-07 10:58:48.762724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.189 [2024-11-07 10:58:48.762743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.189 [2024-11-07 10:58:48.777257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.189 [2024-11-07 10:58:48.777277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.189 [2024-11-07 10:58:48.792760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.189 [2024-11-07 10:58:48.792780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.189 [2024-11-07 10:58:48.808089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.189 [2024-11-07 10:58:48.808108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.189 [2024-11-07 10:58:48.823204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.189 [2024-11-07 10:58:48.823223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.189 [2024-11-07 10:58:48.835005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.189 [2024-11-07 10:58:48.835029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.189 [2024-11-07 10:58:48.848735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.189 [2024-11-07 10:58:48.848754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:48.864137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:48.864156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:48.879446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:48.879465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:48.894718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:48.894736] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:48.908497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:48.908515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:48.923657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:48.923676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:48.939726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:48.939744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:48.955951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:48.955970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:48.966866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:48.966885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:48.980960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:48.980979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:48.996421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:48.996449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:49.011714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:49.011733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:49.023323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:49.023341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:49.036442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:49.036460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:49.047722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:49.047740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:49.060828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:49.060846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:49.076025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:49.076044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:49.091242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:49.091261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.448 [2024-11-07 10:58:49.102091] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.448 [2024-11-07 10:58:49.102114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.117848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.117868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.132222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.132240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.147429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.147453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.163149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.163168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.177162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.177181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.192657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.192676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.207534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.207552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.222742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.222761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.235207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.235225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.248823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.248841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.263795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.263813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.278757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.278775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.292857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.292875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.308063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.308081] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.322996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.323014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.333856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.333875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.348938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.348957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.708 [2024-11-07 10:58:49.363717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.708 [2024-11-07 10:58:49.363734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 10:58:49.379538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.379560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 10:58:49.395489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.395507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 10:58:49.407225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.407242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 10:58:49.420873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.420891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 10:58:49.436204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.436222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 10:58:49.451571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.451589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 16252.25 IOPS, 126.97 MiB/s [2024-11-07T09:58:49.638Z] [2024-11-07 10:58:49.467832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.467850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 10:58:49.483043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.483061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 10:58:49.494835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.494853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 10:58:49.509610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.509629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 
10:58:49.524861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.524879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 10:58:49.540100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.540118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 10:58:49.555236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.555255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 10:58:49.567008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.567026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.967 [2024-11-07 10:58:49.581369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.967 [2024-11-07 10:58:49.581388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.968 [2024-11-07 10:58:49.596389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.968 [2024-11-07 10:58:49.596407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.968 [2024-11-07 10:58:49.611825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.968 [2024-11-07 10:58:49.611844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:21.968 [2024-11-07 10:58:49.627374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:21.968 [2024-11-07 10:58:49.627391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.639304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.639322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.653062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.653081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.668385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.668403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.683363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.683382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.694845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.694863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.709160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.709179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.723988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.724006] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.739422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.739444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.755243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.755261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.769055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.769073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.784102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.784120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.799839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.799857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.814865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.814884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.828112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.828131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.839274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.839292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.852876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.852895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.867502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.867519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.879713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.879730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.227 [2024-11-07 10:58:49.892580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.227 [2024-11-07 10:58:49.892598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.486 [2024-11-07 10:58:49.907760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.486 [2024-11-07 10:58:49.907779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.486 [2024-11-07 10:58:49.919632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.486 [2024-11-07 10:58:49.919650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.486 [2024-11-07 10:58:49.932425] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.486 [2024-11-07 10:58:49.932454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.486 [2024-11-07 10:58:49.947576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.486 [2024-11-07 10:58:49.947594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.486 [2024-11-07 10:58:49.963177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.486 [2024-11-07 10:58:49.963195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.486 [2024-11-07 10:58:49.977030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.486 [2024-11-07 10:58:49.977049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.486 [2024-11-07 10:58:49.992102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.486 [2024-11-07 10:58:49.992122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.486 [2024-11-07 10:58:50.007383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.486 [2024-11-07 10:58:50.007403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.487 [2024-11-07 10:58:50.023323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.487 [2024-11-07 10:58:50.023342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.487 [2024-11-07 10:58:50.034275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.487 [2024-11-07 10:58:50.034294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.487 [2024-11-07 10:58:50.050086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.487 [2024-11-07 10:58:50.050106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.487 [2024-11-07 10:58:50.064038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.487 [2024-11-07 10:58:50.064057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.487 [2024-11-07 10:58:50.074867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.487 [2024-11-07 10:58:50.074886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.487 [2024-11-07 10:58:50.089578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.487 [2024-11-07 10:58:50.089598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.487 [2024-11-07 10:58:50.105341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.487 [2024-11-07 10:58:50.105361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.487 [2024-11-07 10:58:50.120989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.487 [2024-11-07 10:58:50.121009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.487 [2024-11-07 10:58:50.136460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.487 [2024-11-07 10:58:50.136479] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.487 [2024-11-07 10:58:50.151462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.487 [2024-11-07 10:58:50.151481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.163888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.163907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.179294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.179313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.191122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.191141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.205212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.205232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.220902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.220922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.236249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.236267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.251571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.251591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.267506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.267526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.283976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.283995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.299219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.299238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.312978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.312998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.327830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.327849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.343092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.343111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.353989] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.354008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.369587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.369606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.385200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.385219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:22.745 [2024-11-07 10:58:50.400333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:22.745 [2024-11-07 10:58:50.400352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.004 [2024-11-07 10:58:50.415248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.004 [2024-11-07 10:58:50.415267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.004 [2024-11-07 10:58:50.426676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.004 [2024-11-07 10:58:50.426695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.004 [2024-11-07 10:58:50.441232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.004 [2024-11-07 10:58:50.441251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.004 [2024-11-07 10:58:50.456447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.456465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 16254.20 IOPS, 126.99 MiB/s 00:30:23.005 Latency(us) 00:30:23.005 [2024-11-07T09:58:50.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.005 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:30:23.005 Nvme1n1 : 5.01 16256.24 127.00 0.00 0.00 7866.72 2008.82 13677.08 00:30:23.005 [2024-11-07T09:58:50.676Z] =================================================================================================================== 00:30:23.005 [2024-11-07T09:58:50.676Z] Total : 16256.24 127.00 0.00 0.00 7866.72 2008.82 13677.08 00:30:23.005 [2024-11-07 10:58:50.467062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.467081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 [2024-11-07 10:58:50.479062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.479078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 [2024-11-07 10:58:50.491072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.491094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 [2024-11-07 10:58:50.503063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.503083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 [2024-11-07 10:58:50.515063] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.515079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 [2024-11-07 10:58:50.527058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.527075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 [2024-11-07 10:58:50.539059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.539075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 [2024-11-07 10:58:50.551057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.551071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 [2024-11-07 10:58:50.563055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.563070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 [2024-11-07 10:58:50.575053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.575063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 [2024-11-07 10:58:50.587056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.587067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 [2024-11-07 10:58:50.599059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.599072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 [2024-11-07 10:58:50.611053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.611063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 [2024-11-07 10:58:50.623054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:23.005 [2024-11-07 10:58:50.623064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:23.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2899024) - No such process 00:30:23.005 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2899024 00:30:23.005 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.005 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.005 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:23.005 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.005 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:23.005 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.005 10:58:50 
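The long run of paired errors above is the target rejecting duplicate namespace attaches: every nvmf_subsystem_add_ns RPC issued while NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1 fails in spdk_nvmf_subsystem_add_ns_ext and is surfaced by nvmf_rpc.c as "Unable to add namespace". A minimal sketch of how the same pair of messages can be reproduced by hand with scripts/rpc.py instead of the harness's rpc_cmd wrapper; the malloc0 bdev name and the default local RPC socket are assumptions, not shown being used this way in the transcript.

  # Hedged sketch: provoke "Requested NSID 1 already in use" on a running target.
  # Assumes subsystem nqn.2016-06.io.spdk:cnode1 exists and a bdev named malloc0 is available.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree path taken from this log

  # First attach succeeds and claims NSID 1.
  "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # A second attach with the same -n 1 is rejected; the target logs
  # "spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use" and the RPC
  # layer answers "Unable to add namespace", matching the lines above.
  "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
      || echo "expected failure: NSID 1 already in use"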
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:23.005 delay0 00:30:23.005 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.005 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:23.005 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.005 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:23.005 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.005 10:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:30:23.264 [2024-11-07 10:58:50.756104] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:31.381 Initializing NVMe Controllers 00:30:31.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:31.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:31.381 Initialization complete. Launching workers. 00:30:31.381 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 6816 00:30:31.381 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7086, failed to submit 50 00:30:31.381 success 6966, unsuccessful 120, failed 0 00:30:31.381 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:30:31.381 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:30:31.381 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:31.381 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:30:31.381 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:31.381 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:30:31.381 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:31.381 10:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:31.381 rmmod nvme_tcp 00:30:31.381 rmmod nvme_fabrics 00:30:31.381 rmmod nvme_keyring 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2897185 ']' 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2897185 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2897185 
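Around target/zcopy.sh@52-56 above, the script drops NSID 1, wraps malloc0 in a delay bdev, re-attaches it as NSID 1, and then drives it with SPDK's abort example, which produces the I/O and abort counters shown ("I/O completed: 320, failed: 6816", "abort submitted 7086", "success 6966, unsuccessful 120"). A minimal sketch of that sequence using scripts/rpc.py directly rather than the harness's rpc_cmd wrapper; a running target with the subsystem listening on 10.0.0.2:4420 and an existing malloc0 bdev are assumed.

  # Hedged sketch of the delay-bdev + abort sequence visible in this log.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree path taken from this log

  # Detach the current namespace (zcopy.sh@52).
  "$SPDK"/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

  # Wrap malloc0 in a delay bdev with 1 s average/tail read and write latency
  # (values are microseconds) so in-flight I/O lingers long enough to be aborted (zcopy.sh@53).
  "$SPDK"/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000

  # Re-attach the slow bdev as NSID 1 (zcopy.sh@54).
  "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Run the abort example for 5 s, queue depth 64, 50/50 randrw, one core (zcopy.sh@56).
  "$SPDK"/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The nvmftestfini lines that follow then unload nvme_tcp, nvme_fabrics and nvme_keyring, kill the target process (pid 2897185), restore an iptables dump filtered of SPDK_NVMF rules, and flush the address on the test interface cvl_0_1.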
']' 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2897185 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2897185 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2897185' 00:30:31.381 killing process with pid 2897185 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2897185 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2897185 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.381 10:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.760 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:32.760 00:30:32.760 real 0m31.662s 00:30:32.760 user 0m41.763s 00:30:32.760 sys 0m12.407s 00:30:32.760 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:32.760 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:32.760 ************************************ 00:30:32.760 END TEST nvmf_zcopy 00:30:32.760 ************************************ 00:30:32.760 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh 
--transport=tcp --interrupt-mode 00:30:32.760 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:32.760 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:32.760 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:32.760 ************************************ 00:30:32.760 START TEST nvmf_nmic 00:30:32.760 ************************************ 00:30:32.760 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:33.020 * Looking for test storage... 00:30:33.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:33.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.020 --rc genhtml_branch_coverage=1 00:30:33.020 --rc genhtml_function_coverage=1 00:30:33.020 --rc genhtml_legend=1 00:30:33.020 --rc geninfo_all_blocks=1 00:30:33.020 --rc geninfo_unexecuted_blocks=1 00:30:33.020 00:30:33.020 ' 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:33.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.020 --rc genhtml_branch_coverage=1 00:30:33.020 --rc genhtml_function_coverage=1 00:30:33.020 --rc genhtml_legend=1 00:30:33.020 --rc geninfo_all_blocks=1 00:30:33.020 --rc geninfo_unexecuted_blocks=1 00:30:33.020 00:30:33.020 ' 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:33.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.020 --rc genhtml_branch_coverage=1 00:30:33.020 --rc genhtml_function_coverage=1 00:30:33.020 --rc genhtml_legend=1 00:30:33.020 --rc geninfo_all_blocks=1 00:30:33.020 --rc geninfo_unexecuted_blocks=1 00:30:33.020 00:30:33.020 ' 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:33.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.020 --rc genhtml_branch_coverage=1 00:30:33.020 --rc genhtml_function_coverage=1 00:30:33.020 --rc genhtml_legend=1 00:30:33.020 --rc geninfo_all_blocks=1 00:30:33.020 --rc geninfo_unexecuted_blocks=1 00:30:33.020 00:30:33.020 ' 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
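The xtrace above walks through the harness's lcov version check: scripts/common.sh splits both version strings on '.', '-' and ':' and compares them field by field (lt 1.15 2 -> cmp_versions). A stand-alone sketch of the same comparison; the function name ver_lt is hypothetical, and purely numeric fields are assumed (the real script routes each field through its decimal helper first).

  # Hedged sketch of the field-wise version comparison traced above.
  ver_lt() {                                  # usage: ver_lt 1.15 2  -> status 0 if $1 < $2
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      local i a b
      for ((i = 0; i < n; i++)); do
          a=${v1[i]:-0} b=${v2[i]:-0}         # a missing field compares as 0
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1                                # equal versions are not less-than
  }

  ver_lt 1.15 2 && echo "1.15 is older than 2"   # mirrors the 'lt 1.15 2' check in the log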
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.020 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.021 10:59:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:33.021 10:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:38.286 10:59:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.286 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:38.287 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.287 10:59:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:38.287 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:38.287 Found net devices under 0000:86:00.0: cvl_0_0 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.287 
10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:38.287 Found net devices under 0000:86:00.1: cvl_0_1 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
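The entries above and immediately below build the NVMe/TCP test bed on a single host: the two E810 ports show up as cvl_0_0 and cvl_0_1, the target-side port is moved into its own network namespace (cvl_0_0_ns_spdk), and the two ends are addressed out of 10.0.0.0/24 so the initiator (10.0.0.1 on cvl_0_1) can reach the target (10.0.0.2 inside the namespace). A condensed sketch of that setup, using the same interface and namespace names as this trace (the cvl_* device names are specific to this machine), is roughly:

    # target port gets its own namespace so one host can act as both initiator and target
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # address the initiator side (default namespace) and the target side (inside the namespace)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring both links (plus loopback in the namespace) up and open TCP port 4420 on the initiator side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # sanity check: the default namespace should reach the target address inside the namespace
    ping -c 1 10.0.0.2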
00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:38.287 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:38.546 10:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.546 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.546 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:38.546 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:38.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:38.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:30:38.546 00:30:38.546 --- 10.0.0.2 ping statistics --- 00:30:38.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.546 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:38.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:30:38.547 00:30:38.547 --- 10.0.0.1 ping statistics --- 00:30:38.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.547 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2904513 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2904513 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2904513 ']' 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:38.547 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.547 [2024-11-07 10:59:06.110641] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:38.547 [2024-11-07 10:59:06.111581] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:30:38.547 [2024-11-07 10:59:06.111616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.547 [2024-11-07 10:59:06.181248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:38.806 [2024-11-07 10:59:06.225668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:38.806 [2024-11-07 10:59:06.225709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:38.806 [2024-11-07 10:59:06.225716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:38.806 [2024-11-07 10:59:06.225722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:38.806 [2024-11-07 10:59:06.225727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:38.806 [2024-11-07 10:59:06.227241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.806 [2024-11-07 10:59:06.227263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:38.806 [2024-11-07 10:59:06.227346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:38.806 [2024-11-07 10:59:06.227348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.806 [2024-11-07 10:59:06.293688] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:38.806 [2024-11-07 10:59:06.293895] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:38.806 [2024-11-07 10:59:06.293978] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
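With the namespace in place, nvmf_tgt is started inside it with --interrupt-mode (the mode this test group exercises) and then provisioned over JSON-RPC. A condensed, roughly equivalent sequence, run from the spdk checkout and mirroring the rpc_cmd calls traced below (the bdev and subsystem names come from nmic.sh), would be:

    # start the target inside the namespace: shm id 0, tracepoint mask 0xFFFF, 4-core mask, interrupt mode
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

    # once the app listens on /var/tmp/spdk.sock, create the TCP transport and a 64 MB / 512 B malloc bdev
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0

    # expose the bdev through subsystem cnode1 on the target address
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case 1 below then tries to add the same Malloc0 to a second subsystem (cnode2) and expects that add to fail, since the bdev is already claimed exclusive_write by the first subsystem.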
00:30:38.806 [2024-11-07 10:59:06.294083] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:38.806 [2024-11-07 10:59:06.294257] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:38.806 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:38.806 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:30:38.806 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:38.806 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:38.806 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.806 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:38.806 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:38.806 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.806 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.806 [2024-11-07 10:59:06.359984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.806 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.806 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:38.806 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.806 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.806 Malloc0 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:38.807 
10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.807 [2024-11-07 10:59:06.420011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:38.807 test case1: single bdev can't be used in multiple subsystems 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.807 [2024-11-07 10:59:06.447750] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:30:38.807 [2024-11-07 10:59:06.447771] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:38.807 [2024-11-07 10:59:06.447779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:38.807 request: 00:30:38.807 { 00:30:38.807 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:38.807 "namespace": { 00:30:38.807 "bdev_name": "Malloc0", 00:30:38.807 "no_auto_visible": false 00:30:38.807 }, 00:30:38.807 "method": "nvmf_subsystem_add_ns", 00:30:38.807 "req_id": 1 00:30:38.807 } 00:30:38.807 Got JSON-RPC error response 00:30:38.807 response: 00:30:38.807 { 00:30:38.807 "code": -32602, 00:30:38.807 "message": "Invalid parameters" 00:30:38.807 } 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:38.807 10:59:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:38.807 Adding namespace failed - expected result. 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:38.807 test case2: host connect to nvmf target in multiple paths 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:38.807 [2024-11-07 10:59:06.459856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:38.807 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:39.066 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:39.325 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:39.325 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:30:39.325 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:39.325 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:30:39.325 10:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:30:41.858 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:41.858 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:30:41.858 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:41.858 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:30:41.858 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:41.858 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:30:41.858 10:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:41.858 [global] 00:30:41.858 thread=1 00:30:41.858 invalidate=1 
00:30:41.858 rw=write 00:30:41.858 time_based=1 00:30:41.858 runtime=1 00:30:41.858 ioengine=libaio 00:30:41.858 direct=1 00:30:41.858 bs=4096 00:30:41.858 iodepth=1 00:30:41.858 norandommap=0 00:30:41.858 numjobs=1 00:30:41.858 00:30:41.858 verify_dump=1 00:30:41.858 verify_backlog=512 00:30:41.858 verify_state_save=0 00:30:41.858 do_verify=1 00:30:41.858 verify=crc32c-intel 00:30:41.858 [job0] 00:30:41.858 filename=/dev/nvme0n1 00:30:41.858 Could not set queue depth (nvme0n1) 00:30:41.858 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:41.858 fio-3.35 00:30:41.858 Starting 1 thread 00:30:42.795 00:30:42.795 job0: (groupid=0, jobs=1): err= 0: pid=2905207: Thu Nov 7 10:59:10 2024 00:30:42.795 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:30:42.795 slat (nsec): min=10105, max=27166, avg=23043.82, stdev=3127.36 00:30:42.795 clat (usec): min=40863, max=41112, avg=40968.84, stdev=66.98 00:30:42.795 lat (usec): min=40886, max=41123, avg=40991.88, stdev=65.84 00:30:42.795 clat percentiles (usec): 00:30:42.795 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:30:42.795 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:42.795 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:42.795 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:42.795 | 99.99th=[41157] 00:30:42.795 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:30:42.795 slat (usec): min=11, max=27000, avg=65.61, stdev=1192.69 00:30:42.795 clat (usec): min=119, max=348, avg=148.97, stdev=13.78 00:30:42.795 lat (usec): min=150, max=27313, avg=214.58, stdev=1200.01 00:30:42.795 clat percentiles (usec): 00:30:42.795 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 145], 00:30:42.795 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 147], 60.00th=[ 149], 00:30:42.795 | 70.00th=[ 149], 80.00th=[ 151], 90.00th=[ 153], 95.00th=[ 159], 00:30:42.795 | 99.00th=[ 198], 99.50th=[ 233], 99.90th=[ 351], 99.95th=[ 351], 00:30:42.795 | 99.99th=[ 351] 00:30:42.795 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:30:42.795 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:42.795 lat (usec) : 250=95.51%, 500=0.37% 00:30:42.795 lat (msec) : 50=4.12% 00:30:42.795 cpu : usr=0.49%, sys=0.99%, ctx=537, majf=0, minf=1 00:30:42.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:42.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:42.795 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:42.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:42.795 00:30:42.795 Run status group 0 (all jobs): 00:30:42.795 READ: bw=86.8KiB/s (88.9kB/s), 86.8KiB/s-86.8KiB/s (88.9kB/s-88.9kB/s), io=88.0KiB (90.1kB), run=1014-1014msec 00:30:42.795 WRITE: bw=2020KiB/s (2068kB/s), 2020KiB/s-2020KiB/s (2068kB/s-2068kB/s), io=2048KiB (2097kB), run=1014-1014msec 00:30:42.795 00:30:42.795 Disk stats (read/write): 00:30:42.795 nvme0n1: ios=45/512, merge=0/0, ticks=1763/65, in_queue=1828, util=98.40% 00:30:42.795 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:43.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:43.054 10:59:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:43.054 rmmod nvme_tcp 00:30:43.054 rmmod nvme_fabrics 00:30:43.054 rmmod nvme_keyring 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2904513 ']' 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2904513 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2904513 ']' 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2904513 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2904513 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 2904513' 00:30:43.054 killing process with pid 2904513 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2904513 00:30:43.054 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 2904513 00:30:43.314 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:43.314 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:43.314 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:43.314 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:43.314 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:43.314 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:30:43.314 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:30:43.314 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:43.314 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:43.314 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.314 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.314 10:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.851 10:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:45.851 00:30:45.851 real 0m12.500s 00:30:45.851 user 0m23.550s 00:30:45.851 sys 0m5.628s 00:30:45.851 10:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:45.851 10:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:45.851 ************************************ 00:30:45.851 END TEST nvmf_nmic 00:30:45.851 ************************************ 00:30:45.851 10:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:45.851 10:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:45.851 10:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:45.851 10:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:45.851 ************************************ 00:30:45.851 START TEST nvmf_fio_target 00:30:45.851 ************************************ 00:30:45.851 10:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:45.851 * Looking for test storage... 
00:30:45.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.851 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:45.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.852 --rc genhtml_branch_coverage=1 00:30:45.852 --rc genhtml_function_coverage=1 00:30:45.852 --rc genhtml_legend=1 00:30:45.852 --rc geninfo_all_blocks=1 00:30:45.852 --rc geninfo_unexecuted_blocks=1 00:30:45.852 00:30:45.852 ' 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:45.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.852 --rc genhtml_branch_coverage=1 00:30:45.852 --rc genhtml_function_coverage=1 00:30:45.852 --rc genhtml_legend=1 00:30:45.852 --rc geninfo_all_blocks=1 00:30:45.852 --rc geninfo_unexecuted_blocks=1 00:30:45.852 00:30:45.852 ' 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:45.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.852 --rc genhtml_branch_coverage=1 00:30:45.852 --rc genhtml_function_coverage=1 00:30:45.852 --rc genhtml_legend=1 00:30:45.852 --rc geninfo_all_blocks=1 00:30:45.852 --rc geninfo_unexecuted_blocks=1 00:30:45.852 00:30:45.852 ' 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:45.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.852 --rc genhtml_branch_coverage=1 00:30:45.852 --rc genhtml_function_coverage=1 00:30:45.852 --rc genhtml_legend=1 00:30:45.852 --rc geninfo_all_blocks=1 00:30:45.852 --rc geninfo_unexecuted_blocks=1 00:30:45.852 
00:30:45.852 ' 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:45.852 10:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.125 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:51.125 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:51.125 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:51.125 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:51.125 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:51.126 10:59:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:51.126 10:59:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:51.126 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:51.126 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:51.126 Found net 
devices under 0000:86:00.0: cvl_0_0 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:51.126 Found net devices under 0000:86:00.1: cvl_0_1 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:51.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:51.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:30:51.126 00:30:51.126 --- 10.0.0.2 ping statistics --- 00:30:51.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.126 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:30:51.126 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:51.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:51.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:30:51.127 00:30:51.127 --- 10.0.0.1 ping statistics --- 00:30:51.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.127 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2908741 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2908741 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2908741 ']' 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
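The namespace plumbing that nvmftestinit drives above reduces to the short sketch below. This is only a condensed reading of the trace, not a substitute for nvmf/common.sh: the interface names (cvl_0_0, cvl_0_1), the 10.0.0.x addresses and the namespace name are exactly the values the trace prints, and the comment the ipts wrapper appends to the iptables rule is omitted for brevity.

# start from clean addresses, then move the target-side port into its own namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address both ends and bring the links (plus loopback in the namespace) up
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify reachability in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1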
00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:51.127 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.127 [2024-11-07 10:59:18.590856] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:51.127 [2024-11-07 10:59:18.591859] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:30:51.127 [2024-11-07 10:59:18.591901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.127 [2024-11-07 10:59:18.659897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:51.127 [2024-11-07 10:59:18.702592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.127 [2024-11-07 10:59:18.702631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.127 [2024-11-07 10:59:18.702638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.127 [2024-11-07 10:59:18.702645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.127 [2024-11-07 10:59:18.702650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.127 [2024-11-07 10:59:18.704076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.127 [2024-11-07 10:59:18.704172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:51.127 [2024-11-07 10:59:18.704261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.127 [2024-11-07 10:59:18.704263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.127 [2024-11-07 10:59:18.772906] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:51.127 [2024-11-07 10:59:18.773017] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:51.127 [2024-11-07 10:59:18.773214] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:51.127 [2024-11-07 10:59:18.773495] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:51.127 [2024-11-07 10:59:18.773682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
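With nvmf_tgt up in interrupt mode, the remainder of the trace assembles the fio test bed over rpc.py: a TCP transport, seven 64 MB malloc bdevs with 512-byte blocks, a raid0 and a concat bdev on top of them, one subsystem carrying four namespaces and a TCP listener, then an nvme-cli connect followed by four fio-wrapper runs (write/randwrite at iodepth 1 and 128). The sketch below condenses those steps as they appear in the log; $rpc is shorthand introduced here for the full scripts/rpc.py path the trace invokes, and the loops compress the individual per-bdev calls.

rpc=scripts/rpc.py   # the trace uses the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path
$rpc nvmf_create_transport -t tcp -o -u 8192
# seven malloc bdevs: Malloc0/Malloc1 used directly, Malloc2-3 for raid0, Malloc4-6 for concat0
for i in $(seq 7); do $rpc bdev_malloc_create 64 512; done
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# connect from the initiator side using the host NQN/ID exported earlier in the trace
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
# the fio-wrapper invocations logged below then exercise /dev/nvme0n1..n4 with -t write/randwrite and -d 1/128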
00:30:51.387 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:51.387 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:30:51.387 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:51.387 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:51.387 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:51.387 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.387 10:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:51.387 [2024-11-07 10:59:19.012756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.646 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:51.646 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:51.646 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:51.905 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:51.905 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:52.164 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:52.164 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:52.423 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:52.423 10:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:52.423 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:52.682 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:52.682 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:52.940 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:52.940 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:53.199 10:59:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:30:53.199 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:53.458 10:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:53.458 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:53.458 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:53.785 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:53.785 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:53.785 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.092 [2024-11-07 10:59:21.612916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.092 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:54.373 10:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:54.674 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:54.674 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:54.674 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:30:54.674 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:30:54.674 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:30:54.674 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:30:54.674 10:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:30:57.209 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:30:57.209 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:30:57.209 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:30:57.209 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:30:57.209 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:30:57.209 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:30:57.209 10:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:57.209 [global] 00:30:57.209 thread=1 00:30:57.209 invalidate=1 00:30:57.209 rw=write 00:30:57.209 time_based=1 00:30:57.209 runtime=1 00:30:57.209 ioengine=libaio 00:30:57.209 direct=1 00:30:57.209 bs=4096 00:30:57.209 iodepth=1 00:30:57.209 norandommap=0 00:30:57.209 numjobs=1 00:30:57.209 00:30:57.209 verify_dump=1 00:30:57.209 verify_backlog=512 00:30:57.209 verify_state_save=0 00:30:57.209 do_verify=1 00:30:57.209 verify=crc32c-intel 00:30:57.209 [job0] 00:30:57.209 filename=/dev/nvme0n1 00:30:57.209 [job1] 00:30:57.209 filename=/dev/nvme0n2 00:30:57.209 [job2] 00:30:57.209 filename=/dev/nvme0n3 00:30:57.209 [job3] 00:30:57.209 filename=/dev/nvme0n4 00:30:57.209 Could not set queue depth (nvme0n1) 00:30:57.209 Could not set queue depth (nvme0n2) 00:30:57.209 Could not set queue depth (nvme0n3) 00:30:57.209 Could not set queue depth (nvme0n4) 00:30:57.209 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:57.209 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:57.209 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:57.209 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:57.209 fio-3.35 00:30:57.209 Starting 4 threads 00:30:58.587 00:30:58.587 job0: (groupid=0, jobs=1): err= 0: pid=2909874: Thu Nov 7 10:59:25 2024 00:30:58.587 read: IOPS=1307, BW=5231KiB/s (5356kB/s)(5236KiB/1001msec) 00:30:58.587 slat (nsec): min=4722, max=25344, avg=7618.20, stdev=1873.64 00:30:58.587 clat (usec): min=183, max=41039, avg=539.16, stdev=3393.65 00:30:58.587 lat (usec): min=191, max=41063, avg=546.78, stdev=3394.75 00:30:58.587 clat percentiles (usec): 00:30:58.587 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:30:58.587 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:30:58.587 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 265], 00:30:58.587 | 99.00th=[ 433], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:58.587 | 99.99th=[41157] 00:30:58.587 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:30:58.587 slat (nsec): min=9641, max=43655, avg=11259.97, stdev=1574.66 00:30:58.587 clat (usec): min=131, max=344, avg=168.53, stdev=18.48 00:30:58.587 lat (usec): min=142, max=388, avg=179.79, stdev=18.64 00:30:58.587 clat percentiles (usec): 00:30:58.587 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:30:58.587 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:30:58.587 | 70.00th=[ 174], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 204], 00:30:58.587 | 99.00th=[ 227], 
99.50th=[ 235], 99.90th=[ 269], 99.95th=[ 347], 00:30:58.587 | 99.99th=[ 347] 00:30:58.587 bw ( KiB/s): min= 8424, max= 8424, per=44.85%, avg=8424.00, stdev= 0.00, samples=1 00:30:58.587 iops : min= 2106, max= 2106, avg=2106.00, stdev= 0.00, samples=1 00:30:58.587 lat (usec) : 250=87.17%, 500=12.37%, 750=0.11% 00:30:58.587 lat (msec) : 20=0.04%, 50=0.32% 00:30:58.587 cpu : usr=1.60%, sys=3.40%, ctx=2847, majf=0, minf=1 00:30:58.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.587 issued rwts: total=1309,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:58.587 job1: (groupid=0, jobs=1): err= 0: pid=2909875: Thu Nov 7 10:59:25 2024 00:30:58.587 read: IOPS=21, BW=84.9KiB/s (87.0kB/s)(88.0KiB/1036msec) 00:30:58.587 slat (nsec): min=10991, max=26674, avg=20409.50, stdev=4426.41 00:30:58.587 clat (usec): min=40857, max=41933, avg=41042.82, stdev=231.90 00:30:58.587 lat (usec): min=40880, max=41956, avg=41063.23, stdev=231.33 00:30:58.587 clat percentiles (usec): 00:30:58.587 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:30:58.587 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:58.587 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:30:58.587 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:30:58.587 | 99.99th=[41681] 00:30:58.587 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:30:58.587 slat (nsec): min=11881, max=42079, avg=14169.60, stdev=2998.61 00:30:58.587 clat (usec): min=168, max=331, avg=238.69, stdev=11.76 00:30:58.587 lat (usec): min=184, max=369, avg=252.86, stdev=11.95 00:30:58.587 clat percentiles (usec): 00:30:58.587 | 1.00th=[ 184], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 235], 00:30:58.587 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 239], 60.00th=[ 241], 00:30:58.587 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 253], 00:30:58.587 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 330], 99.95th=[ 330], 00:30:58.587 | 99.99th=[ 330] 00:30:58.587 bw ( KiB/s): min= 4096, max= 4096, per=21.81%, avg=4096.00, stdev= 0.00, samples=1 00:30:58.587 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:58.587 lat (usec) : 250=88.20%, 500=7.68% 00:30:58.587 lat (msec) : 50=4.12% 00:30:58.587 cpu : usr=0.58%, sys=0.39%, ctx=537, majf=0, minf=1 00:30:58.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.587 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:58.587 job2: (groupid=0, jobs=1): err= 0: pid=2909881: Thu Nov 7 10:59:25 2024 00:30:58.587 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:30:58.587 slat (nsec): min=10536, max=26642, avg=22996.77, stdev=3027.04 00:30:58.587 clat (usec): min=40816, max=41103, avg=40972.92, stdev=79.74 00:30:58.587 lat (usec): min=40839, max=41127, avg=40995.92, stdev=79.00 00:30:58.587 clat percentiles (usec): 00:30:58.587 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:30:58.587 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:58.587 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:58.587 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:58.587 | 99.99th=[41157] 00:30:58.587 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:30:58.587 slat (nsec): min=10388, max=43481, avg=11996.07, stdev=2086.72 00:30:58.587 clat (usec): min=149, max=331, avg=189.62, stdev=23.42 00:30:58.587 lat (usec): min=162, max=375, avg=201.62, stdev=23.95 00:30:58.587 clat percentiles (usec): 00:30:58.587 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:30:58.587 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:30:58.587 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 233], 00:30:58.587 | 99.00th=[ 297], 99.50th=[ 322], 99.90th=[ 330], 99.95th=[ 330], 00:30:58.587 | 99.99th=[ 330] 00:30:58.587 bw ( KiB/s): min= 4096, max= 4096, per=21.81%, avg=4096.00, stdev= 0.00, samples=1 00:30:58.587 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:58.587 lat (usec) : 250=93.26%, 500=2.62% 00:30:58.587 lat (msec) : 50=4.12% 00:30:58.587 cpu : usr=0.50%, sys=0.89%, ctx=534, majf=0, minf=2 00:30:58.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.588 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:58.588 job3: (groupid=0, jobs=1): err= 0: pid=2909885: Thu Nov 7 10:59:25 2024 00:30:58.588 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:58.588 slat (nsec): min=7304, max=41551, avg=8573.88, stdev=1840.23 00:30:58.588 clat (usec): min=197, max=41008, avg=254.72, stdev=901.15 00:30:58.588 lat (usec): min=215, max=41016, avg=263.29, stdev=901.15 00:30:58.588 clat percentiles (usec): 00:30:58.588 | 1.00th=[ 215], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 223], 00:30:58.588 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 235], 00:30:58.588 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 262], 00:30:58.588 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 416], 99.95th=[ 453], 00:30:58.588 | 99.99th=[41157] 00:30:58.588 write: IOPS=2302, BW=9211KiB/s (9432kB/s)(9220KiB/1001msec); 0 zone resets 00:30:58.588 slat (nsec): min=10832, max=45595, avg=12153.10, stdev=1776.05 00:30:58.588 clat (usec): min=143, max=941, avg=181.62, stdev=33.21 00:30:58.588 lat (usec): min=156, max=958, avg=193.78, stdev=33.47 00:30:58.588 clat percentiles (usec): 00:30:58.588 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:30:58.588 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 176], 00:30:58.588 | 70.00th=[ 182], 80.00th=[ 200], 90.00th=[ 237], 95.00th=[ 247], 00:30:58.588 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 297], 99.95th=[ 310], 00:30:58.588 | 99.99th=[ 938] 00:30:58.588 bw ( KiB/s): min= 8192, max= 8192, per=43.61%, avg=8192.00, stdev= 0.00, samples=1 00:30:58.588 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:58.588 lat (usec) : 250=91.64%, 500=8.32%, 1000=0.02% 00:30:58.588 lat (msec) : 50=0.02% 00:30:58.588 cpu : usr=4.30%, sys=6.40%, ctx=4355, majf=0, minf=1 00:30:58.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.588 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.588 issued rwts: total=2048,2305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:58.588 00:30:58.588 Run status group 0 (all jobs): 00:30:58.588 READ: bw=12.8MiB/s (13.4MB/s), 84.9KiB/s-8184KiB/s (87.0kB/s-8380kB/s), io=13.3MiB (13.9MB), run=1001-1036msec 00:30:58.588 WRITE: bw=18.3MiB/s (19.2MB/s), 1977KiB/s-9211KiB/s (2024kB/s-9432kB/s), io=19.0MiB (19.9MB), run=1001-1036msec 00:30:58.588 00:30:58.588 Disk stats (read/write): 00:30:58.588 nvme0n1: ios=1057/1536, merge=0/0, ticks=1517/253, in_queue=1770, util=97.89% 00:30:58.588 nvme0n2: ios=42/512, merge=0/0, ticks=1683/119, in_queue=1802, util=98.27% 00:30:58.588 nvme0n3: ios=18/512, merge=0/0, ticks=738/95, in_queue=833, util=89.03% 00:30:58.588 nvme0n4: ios=1641/2048, merge=0/0, ticks=1344/350, in_queue=1694, util=98.11% 00:30:58.588 10:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:58.588 [global] 00:30:58.588 thread=1 00:30:58.588 invalidate=1 00:30:58.588 rw=randwrite 00:30:58.588 time_based=1 00:30:58.588 runtime=1 00:30:58.588 ioengine=libaio 00:30:58.588 direct=1 00:30:58.588 bs=4096 00:30:58.588 iodepth=1 00:30:58.588 norandommap=0 00:30:58.588 numjobs=1 00:30:58.588 00:30:58.588 verify_dump=1 00:30:58.588 verify_backlog=512 00:30:58.588 verify_state_save=0 00:30:58.588 do_verify=1 00:30:58.588 verify=crc32c-intel 00:30:58.588 [job0] 00:30:58.588 filename=/dev/nvme0n1 00:30:58.588 [job1] 00:30:58.588 filename=/dev/nvme0n2 00:30:58.588 [job2] 00:30:58.588 filename=/dev/nvme0n3 00:30:58.588 [job3] 00:30:58.588 filename=/dev/nvme0n4 00:30:58.588 Could not set queue depth (nvme0n1) 00:30:58.588 Could not set queue depth (nvme0n2) 00:30:58.588 Could not set queue depth (nvme0n3) 00:30:58.588 Could not set queue depth (nvme0n4) 00:30:58.588 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.588 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.588 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.588 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:58.588 fio-3.35 00:30:58.588 Starting 4 threads 00:30:59.966 00:30:59.966 job0: (groupid=0, jobs=1): err= 0: pid=2910274: Thu Nov 7 10:59:27 2024 00:30:59.966 read: IOPS=22, BW=90.5KiB/s (92.6kB/s)(92.0KiB/1017msec) 00:30:59.966 slat (nsec): min=9473, max=25878, avg=22334.13, stdev=4813.21 00:30:59.966 clat (usec): min=260, max=41116, avg=39179.82, stdev=8484.78 00:30:59.966 lat (usec): min=282, max=41139, avg=39202.15, stdev=8484.75 00:30:59.966 clat percentiles (usec): 00:30:59.966 | 1.00th=[ 262], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:30:59.966 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:59.966 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:59.966 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:59.966 | 99.99th=[41157] 00:30:59.966 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:30:59.966 slat (nsec): min=9796, max=53061, avg=12278.75, 
stdev=2756.04 00:30:59.966 clat (usec): min=137, max=335, avg=208.32, stdev=36.42 00:30:59.966 lat (usec): min=148, max=368, avg=220.60, stdev=36.74 00:30:59.966 clat percentiles (usec): 00:30:59.966 | 1.00th=[ 149], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 182], 00:30:59.966 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 208], 00:30:59.966 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 273], 95.00th=[ 289], 00:30:59.966 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 334], 99.95th=[ 334], 00:30:59.966 | 99.99th=[ 334] 00:30:59.966 bw ( KiB/s): min= 4096, max= 4096, per=25.63%, avg=4096.00, stdev= 0.00, samples=1 00:30:59.966 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:59.966 lat (usec) : 250=84.11%, 500=11.78% 00:30:59.966 lat (msec) : 50=4.11% 00:30:59.966 cpu : usr=0.39%, sys=0.98%, ctx=536, majf=0, minf=1 00:30:59.966 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:59.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.966 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.966 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:59.966 job1: (groupid=0, jobs=1): err= 0: pid=2910288: Thu Nov 7 10:59:27 2024 00:30:59.966 read: IOPS=1506, BW=6025KiB/s (6170kB/s)(6176KiB/1025msec) 00:30:59.966 slat (nsec): min=6822, max=23677, avg=7888.80, stdev=1294.34 00:30:59.966 clat (usec): min=182, max=41078, avg=435.02, stdev=2926.58 00:30:59.966 lat (usec): min=190, max=41101, avg=442.91, stdev=2927.56 00:30:59.966 clat percentiles (usec): 00:30:59.966 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 208], 00:30:59.966 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 221], 00:30:59.966 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 249], 95.00th=[ 251], 00:30:59.966 | 99.00th=[ 297], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:30:59.966 | 99.99th=[41157] 00:30:59.966 write: IOPS=1998, BW=7992KiB/s (8184kB/s)(8192KiB/1025msec); 0 zone resets 00:30:59.966 slat (nsec): min=9709, max=36817, avg=11325.62, stdev=1419.86 00:30:59.966 clat (usec): min=117, max=351, avg=149.59, stdev=31.46 00:30:59.966 lat (usec): min=128, max=382, avg=160.92, stdev=31.74 00:30:59.966 clat percentiles (usec): 00:30:59.966 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 130], 00:30:59.966 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:30:59.966 | 70.00th=[ 141], 80.00th=[ 182], 90.00th=[ 204], 95.00th=[ 219], 00:30:59.966 | 99.00th=[ 241], 99.50th=[ 245], 99.90th=[ 273], 99.95th=[ 289], 00:30:59.966 | 99.99th=[ 351] 00:30:59.966 bw ( KiB/s): min= 4096, max=12288, per=51.25%, avg=8192.00, stdev=5792.62, samples=2 00:30:59.966 iops : min= 1024, max= 3072, avg=2048.00, stdev=1448.15, samples=2 00:30:59.966 lat (usec) : 250=96.63%, 500=3.15% 00:30:59.966 lat (msec) : 50=0.22% 00:30:59.966 cpu : usr=2.15%, sys=3.22%, ctx=3594, majf=0, minf=1 00:30:59.966 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:59.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.966 issued rwts: total=1544,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.966 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:59.966 job2: (groupid=0, jobs=1): err= 0: pid=2910306: Thu Nov 7 10:59:27 2024 00:30:59.966 read: IOPS=21, 
BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:30:59.966 slat (nsec): min=12409, max=27738, avg=20660.86, stdev=5691.84 00:30:59.966 clat (usec): min=40811, max=41155, avg=40975.56, stdev=78.39 00:30:59.966 lat (usec): min=40838, max=41182, avg=40996.22, stdev=78.77 00:30:59.966 clat percentiles (usec): 00:30:59.966 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:30:59.966 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:59.966 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:59.966 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:59.966 | 99.99th=[41157] 00:30:59.966 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:30:59.966 slat (nsec): min=10916, max=40495, avg=13355.58, stdev=2031.95 00:30:59.966 clat (usec): min=157, max=283, avg=186.59, stdev=14.63 00:30:59.966 lat (usec): min=168, max=298, avg=199.95, stdev=15.55 00:30:59.966 clat percentiles (usec): 00:30:59.966 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:30:59.966 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:30:59.966 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 204], 95.00th=[ 212], 00:30:59.966 | 99.00th=[ 221], 99.50th=[ 245], 99.90th=[ 285], 99.95th=[ 285], 00:30:59.966 | 99.99th=[ 285] 00:30:59.966 bw ( KiB/s): min= 4096, max= 4096, per=25.63%, avg=4096.00, stdev= 0.00, samples=1 00:30:59.966 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:59.966 lat (usec) : 250=95.51%, 500=0.37% 00:30:59.966 lat (msec) : 50=4.12% 00:30:59.966 cpu : usr=0.99%, sys=0.50%, ctx=535, majf=0, minf=1 00:30:59.966 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:59.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.966 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.966 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:59.966 job3: (groupid=0, jobs=1): err= 0: pid=2910311: Thu Nov 7 10:59:27 2024 00:30:59.966 read: IOPS=858, BW=3433KiB/s (3515kB/s)(3436KiB/1001msec) 00:30:59.966 slat (nsec): min=6331, max=27191, avg=7551.34, stdev=1672.69 00:30:59.966 clat (usec): min=188, max=42047, avg=928.96, stdev=5169.14 00:30:59.966 lat (usec): min=195, max=42058, avg=936.51, stdev=5170.14 00:30:59.966 clat percentiles (usec): 00:30:59.966 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 247], 00:30:59.966 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 258], 00:30:59.966 | 70.00th=[ 260], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 412], 00:30:59.966 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:30:59.966 | 99.99th=[42206] 00:30:59.966 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:30:59.966 slat (nsec): min=8952, max=49855, avg=10927.25, stdev=3354.80 00:30:59.966 clat (usec): min=126, max=374, avg=175.91, stdev=38.08 00:30:59.966 lat (usec): min=136, max=403, avg=186.84, stdev=38.80 00:30:59.966 clat percentiles (usec): 00:30:59.966 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:30:59.966 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 176], 60.00th=[ 194], 00:30:59.966 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 227], 95.00th=[ 241], 00:30:59.966 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 293], 99.95th=[ 375], 00:30:59.967 | 99.99th=[ 375] 00:30:59.967 bw ( KiB/s): min= 4096, max= 
4096, per=25.63%, avg=4096.00, stdev= 0.00, samples=1 00:30:59.967 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:59.967 lat (usec) : 250=68.30%, 500=30.96% 00:30:59.967 lat (msec) : 50=0.74% 00:30:59.967 cpu : usr=0.80%, sys=1.90%, ctx=1883, majf=0, minf=1 00:30:59.967 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:59.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:59.967 issued rwts: total=859,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:59.967 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:59.967 00:30:59.967 Run status group 0 (all jobs): 00:30:59.967 READ: bw=9553KiB/s (9782kB/s), 87.4KiB/s-6025KiB/s (89.5kB/s-6170kB/s), io=9792KiB (10.0MB), run=1001-1025msec 00:30:59.967 WRITE: bw=15.6MiB/s (16.4MB/s), 2014KiB/s-7992KiB/s (2062kB/s-8184kB/s), io=16.0MiB (16.8MB), run=1001-1025msec 00:30:59.967 00:30:59.967 Disk stats (read/write): 00:30:59.967 nvme0n1: ios=69/512, merge=0/0, ticks=758/95, in_queue=853, util=86.67% 00:30:59.967 nvme0n2: ios=1563/2048, merge=0/0, ticks=1411/308, in_queue=1719, util=93.81% 00:30:59.967 nvme0n3: ios=58/512, merge=0/0, ticks=1823/94, in_queue=1917, util=98.33% 00:30:59.967 nvme0n4: ios=569/640, merge=0/0, ticks=766/121, in_queue=887, util=94.96% 00:30:59.967 10:59:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:59.967 [global] 00:30:59.967 thread=1 00:30:59.967 invalidate=1 00:30:59.967 rw=write 00:30:59.967 time_based=1 00:30:59.967 runtime=1 00:30:59.967 ioengine=libaio 00:30:59.967 direct=1 00:30:59.967 bs=4096 00:30:59.967 iodepth=128 00:30:59.967 norandommap=0 00:30:59.967 numjobs=1 00:30:59.967 00:30:59.967 verify_dump=1 00:30:59.967 verify_backlog=512 00:30:59.967 verify_state_save=0 00:30:59.967 do_verify=1 00:30:59.967 verify=crc32c-intel 00:30:59.967 [job0] 00:30:59.967 filename=/dev/nvme0n1 00:30:59.967 [job1] 00:30:59.967 filename=/dev/nvme0n2 00:30:59.967 [job2] 00:30:59.967 filename=/dev/nvme0n3 00:30:59.967 [job3] 00:30:59.967 filename=/dev/nvme0n4 00:30:59.967 Could not set queue depth (nvme0n1) 00:30:59.967 Could not set queue depth (nvme0n2) 00:30:59.967 Could not set queue depth (nvme0n3) 00:30:59.967 Could not set queue depth (nvme0n4) 00:31:00.225 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:00.225 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:00.225 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:00.226 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:00.226 fio-3.35 00:31:00.226 Starting 4 threads 00:31:01.621 00:31:01.621 job0: (groupid=0, jobs=1): err= 0: pid=2910692: Thu Nov 7 10:59:28 2024 00:31:01.621 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:31:01.621 slat (nsec): min=1834, max=13929k, avg=161000.36, stdev=984014.21 00:31:01.621 clat (usec): min=10383, max=37288, avg=20653.91, stdev=5067.24 00:31:01.621 lat (usec): min=10390, max=40964, avg=20814.91, stdev=5122.99 00:31:01.621 clat percentiles (usec): 00:31:01.621 | 1.00th=[10683], 5.00th=[13042], 10.00th=[13960], 20.00th=[16188], 00:31:01.621 | 
30.00th=[17695], 40.00th=[18482], 50.00th=[20317], 60.00th=[22152], 00:31:01.621 | 70.00th=[23200], 80.00th=[25035], 90.00th=[27657], 95.00th=[28967], 00:31:01.621 | 99.00th=[34866], 99.50th=[36439], 99.90th=[37487], 99.95th=[37487], 00:31:01.621 | 99.99th=[37487] 00:31:01.621 write: IOPS=2789, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1005msec); 0 zone resets 00:31:01.621 slat (usec): min=2, max=11084, avg=203.23, stdev=1054.77 00:31:01.621 clat (usec): min=723, max=57993, avg=26335.66, stdev=13600.14 00:31:01.621 lat (usec): min=8158, max=58007, avg=26538.89, stdev=13700.78 00:31:01.621 clat percentiles (usec): 00:31:01.621 | 1.00th=[10421], 5.00th=[13698], 10.00th=[14222], 20.00th=[16319], 00:31:01.621 | 30.00th=[17433], 40.00th=[19006], 50.00th=[20841], 60.00th=[22414], 00:31:01.621 | 70.00th=[25297], 80.00th=[44303], 90.00th=[51119], 95.00th=[53216], 00:31:01.621 | 99.00th=[55837], 99.50th=[56361], 99.90th=[57934], 99.95th=[57934], 00:31:01.621 | 99.99th=[57934] 00:31:01.621 bw ( KiB/s): min= 9112, max=12288, per=15.12%, avg=10700.00, stdev=2245.77, samples=2 00:31:01.621 iops : min= 2278, max= 3072, avg=2675.00, stdev=561.44, samples=2 00:31:01.621 lat (usec) : 750=0.02% 00:31:01.621 lat (msec) : 10=0.15%, 20=46.84%, 50=46.60%, 100=6.40% 00:31:01.621 cpu : usr=3.29%, sys=3.49%, ctx=243, majf=0, minf=1 00:31:01.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:01.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:01.621 issued rwts: total=2560,2803,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:01.621 job1: (groupid=0, jobs=1): err= 0: pid=2910706: Thu Nov 7 10:59:28 2024 00:31:01.621 read: IOPS=5489, BW=21.4MiB/s (22.5MB/s)(21.5MiB/1002msec) 00:31:01.621 slat (nsec): min=982, max=11373k, avg=90922.63, stdev=604981.48 00:31:01.621 clat (usec): min=695, max=34322, avg=11638.20, stdev=4868.03 00:31:01.621 lat (usec): min=4525, max=34330, avg=11729.12, stdev=4892.58 00:31:01.621 clat percentiles (usec): 00:31:01.621 | 1.00th=[ 4948], 5.00th=[ 7242], 10.00th=[ 7701], 20.00th=[ 8717], 00:31:01.621 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10552], 60.00th=[10945], 00:31:01.621 | 70.00th=[11600], 80.00th=[12649], 90.00th=[17171], 95.00th=[23987], 00:31:01.621 | 99.00th=[30802], 99.50th=[31327], 99.90th=[34341], 99.95th=[34341], 00:31:01.621 | 99.99th=[34341] 00:31:01.621 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:31:01.621 slat (nsec): min=1799, max=20754k, avg=84073.99, stdev=590627.28 00:31:01.621 clat (usec): min=4871, max=46004, avg=11165.52, stdev=4345.62 00:31:01.621 lat (usec): min=4874, max=46035, avg=11249.59, stdev=4397.56 00:31:01.621 clat percentiles (usec): 00:31:01.621 | 1.00th=[ 6718], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[ 9634], 00:31:01.621 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:31:01.621 | 70.00th=[10421], 80.00th=[10683], 90.00th=[13304], 95.00th=[21365], 00:31:01.621 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[38011], 00:31:01.621 | 99.99th=[45876] 00:31:01.621 bw ( KiB/s): min=20480, max=20480, per=28.94%, avg=20480.00, stdev= 0.00, samples=1 00:31:01.621 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:31:01.621 lat (usec) : 750=0.01% 00:31:01.621 lat (msec) : 10=39.73%, 20=53.91%, 50=6.35% 00:31:01.621 cpu : usr=4.30%, sys=4.70%, ctx=523, majf=0, minf=1 
00:31:01.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:01.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:01.621 issued rwts: total=5500,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:01.621 job2: (groupid=0, jobs=1): err= 0: pid=2910723: Thu Nov 7 10:59:28 2024 00:31:01.621 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:31:01.621 slat (nsec): min=1131, max=19265k, avg=82336.58, stdev=577709.96 00:31:01.621 clat (usec): min=1078, max=30392, avg=11073.75, stdev=3812.43 00:31:01.621 lat (usec): min=1084, max=30414, avg=11156.09, stdev=3837.01 00:31:01.621 clat percentiles (usec): 00:31:01.621 | 1.00th=[ 2212], 5.00th=[ 4686], 10.00th=[ 7570], 20.00th=[ 8291], 00:31:01.621 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[11207], 60.00th=[11600], 00:31:01.621 | 70.00th=[11863], 80.00th=[12780], 90.00th=[15008], 95.00th=[17695], 00:31:01.621 | 99.00th=[26084], 99.50th=[26084], 99.90th=[28181], 99.95th=[28181], 00:31:01.621 | 99.99th=[30278] 00:31:01.621 write: IOPS=5743, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1003msec); 0 zone resets 00:31:01.621 slat (nsec): min=1900, max=10300k, avg=83890.77, stdev=516334.02 00:31:01.621 clat (usec): min=329, max=20077, avg=11172.00, stdev=2140.07 00:31:01.621 lat (usec): min=439, max=20856, avg=11255.89, stdev=2163.84 00:31:01.621 clat percentiles (usec): 00:31:01.621 | 1.00th=[ 2671], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[10421], 00:31:01.621 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:31:01.621 | 70.00th=[11863], 80.00th=[12125], 90.00th=[13173], 95.00th=[14091], 00:31:01.621 | 99.00th=[16581], 99.50th=[16581], 99.90th=[17171], 99.95th=[19006], 00:31:01.621 | 99.99th=[20055] 00:31:01.621 bw ( KiB/s): min=20880, max=24232, per=31.87%, avg=22556.00, stdev=2370.22, samples=2 00:31:01.621 iops : min= 5220, max= 6058, avg=5639.00, stdev=592.56, samples=2 00:31:01.621 lat (usec) : 500=0.01%, 750=0.03% 00:31:01.621 lat (msec) : 2=0.77%, 4=1.83%, 10=23.33%, 20=72.78%, 50=1.25% 00:31:01.621 cpu : usr=2.99%, sys=5.09%, ctx=494, majf=0, minf=1 00:31:01.621 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:01.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:01.621 issued rwts: total=5632,5761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.621 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:01.621 job3: (groupid=0, jobs=1): err= 0: pid=2910728: Thu Nov 7 10:59:28 2024 00:31:01.621 read: IOPS=3331, BW=13.0MiB/s (13.6MB/s)(13.1MiB/1005msec) 00:31:01.621 slat (nsec): min=1635, max=11041k, avg=137177.88, stdev=899505.31 00:31:01.621 clat (usec): min=1886, max=38086, avg=17426.41, stdev=4874.02 00:31:01.621 lat (usec): min=8710, max=38089, avg=17563.59, stdev=4917.55 00:31:01.621 clat percentiles (usec): 00:31:01.621 | 1.00th=[ 9241], 5.00th=[10945], 10.00th=[12911], 20.00th=[13566], 00:31:01.621 | 30.00th=[13960], 40.00th=[14615], 50.00th=[16188], 60.00th=[17433], 00:31:01.621 | 70.00th=[19530], 80.00th=[21365], 90.00th=[25560], 95.00th=[26870], 00:31:01.621 | 99.00th=[29754], 99.50th=[31851], 99.90th=[31851], 99.95th=[36963], 00:31:01.621 | 99.99th=[38011] 00:31:01.621 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 
00:31:01.621 slat (usec): min=2, max=15604, avg=143.78, stdev=926.91 00:31:01.621 clat (usec): min=6258, max=44359, avg=19208.95, stdev=6160.11 00:31:01.621 lat (usec): min=6273, max=44367, avg=19352.73, stdev=6252.79 00:31:01.621 clat percentiles (usec): 00:31:01.621 | 1.00th=[10683], 5.00th=[13304], 10.00th=[13435], 20.00th=[13960], 00:31:01.621 | 30.00th=[14877], 40.00th=[17171], 50.00th=[17957], 60.00th=[19268], 00:31:01.621 | 70.00th=[20579], 80.00th=[22414], 90.00th=[26346], 95.00th=[32900], 00:31:01.621 | 99.00th=[41157], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:31:01.621 | 99.99th=[44303] 00:31:01.621 bw ( KiB/s): min=13136, max=15536, per=20.26%, avg=14336.00, stdev=1697.06, samples=2 00:31:01.621 iops : min= 3284, max= 3884, avg=3584.00, stdev=424.26, samples=2 00:31:01.621 lat (msec) : 2=0.01%, 10=1.40%, 20=67.25%, 50=31.33% 00:31:01.621 cpu : usr=2.19%, sys=6.18%, ctx=219, majf=0, minf=2 00:31:01.622 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:01.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:01.622 issued rwts: total=3348,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.622 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:01.622 00:31:01.622 Run status group 0 (all jobs): 00:31:01.622 READ: bw=66.2MiB/s (69.4MB/s), 9.95MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=66.6MiB (69.8MB), run=1002-1005msec 00:31:01.622 WRITE: bw=69.1MiB/s (72.5MB/s), 10.9MiB/s-22.4MiB/s (11.4MB/s-23.5MB/s), io=69.5MiB (72.8MB), run=1002-1005msec 00:31:01.622 00:31:01.622 Disk stats (read/write): 00:31:01.622 nvme0n1: ios=2097/2479, merge=0/0, ticks=21019/31500, in_queue=52519, util=85.57% 00:31:01.622 nvme0n2: ios=4658/4711, merge=0/0, ticks=27122/24780, in_queue=51902, util=90.96% 00:31:01.622 nvme0n3: ios=4665/5111, merge=0/0, ticks=28647/23860, in_queue=52507, util=93.34% 00:31:01.622 nvme0n4: ios=2876/3072, merge=0/0, ticks=24621/26974, in_queue=51595, util=95.38% 00:31:01.622 10:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:01.622 [global] 00:31:01.622 thread=1 00:31:01.622 invalidate=1 00:31:01.622 rw=randwrite 00:31:01.622 time_based=1 00:31:01.622 runtime=1 00:31:01.622 ioengine=libaio 00:31:01.622 direct=1 00:31:01.622 bs=4096 00:31:01.622 iodepth=128 00:31:01.622 norandommap=0 00:31:01.622 numjobs=1 00:31:01.622 00:31:01.622 verify_dump=1 00:31:01.622 verify_backlog=512 00:31:01.622 verify_state_save=0 00:31:01.622 do_verify=1 00:31:01.622 verify=crc32c-intel 00:31:01.622 [job0] 00:31:01.622 filename=/dev/nvme0n1 00:31:01.622 [job1] 00:31:01.622 filename=/dev/nvme0n2 00:31:01.622 [job2] 00:31:01.622 filename=/dev/nvme0n3 00:31:01.622 [job3] 00:31:01.622 filename=/dev/nvme0n4 00:31:01.622 Could not set queue depth (nvme0n1) 00:31:01.622 Could not set queue depth (nvme0n2) 00:31:01.622 Could not set queue depth (nvme0n3) 00:31:01.622 Could not set queue depth (nvme0n4) 00:31:01.884 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:01.884 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:01.884 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:01.884 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:01.884 fio-3.35 00:31:01.884 Starting 4 threads 00:31:03.256 00:31:03.256 job0: (groupid=0, jobs=1): err= 0: pid=2911107: Thu Nov 7 10:59:30 2024 00:31:03.256 read: IOPS=5328, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1003msec) 00:31:03.256 slat (nsec): min=1278, max=14086k, avg=93337.01, stdev=689735.14 00:31:03.256 clat (usec): min=920, max=49415, avg=12133.64, stdev=5353.35 00:31:03.256 lat (usec): min=3329, max=49449, avg=12226.98, stdev=5397.69 00:31:03.256 clat percentiles (usec): 00:31:03.256 | 1.00th=[ 4293], 5.00th=[ 7635], 10.00th=[ 8979], 20.00th=[ 9503], 00:31:03.256 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:31:03.256 | 70.00th=[11863], 80.00th=[13435], 90.00th=[16909], 95.00th=[20841], 00:31:03.256 | 99.00th=[38536], 99.50th=[38536], 99.90th=[43779], 99.95th=[46400], 00:31:03.256 | 99.99th=[49546] 00:31:03.256 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:31:03.256 slat (nsec): min=1807, max=14818k, avg=81811.69, stdev=551055.06 00:31:03.256 clat (usec): min=814, max=60116, avg=11072.07, stdev=4967.11 00:31:03.256 lat (usec): min=827, max=60119, avg=11153.88, stdev=4998.00 00:31:03.256 clat percentiles (usec): 00:31:03.256 | 1.00th=[ 3752], 5.00th=[ 6259], 10.00th=[ 7767], 20.00th=[ 9110], 00:31:03.256 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:31:03.256 | 70.00th=[11469], 80.00th=[11994], 90.00th=[14222], 95.00th=[17695], 00:31:03.256 | 99.00th=[32375], 99.50th=[44303], 99.90th=[54264], 99.95th=[60031], 00:31:03.256 | 99.99th=[60031] 00:31:03.256 bw ( KiB/s): min=20480, max=24576, per=29.79%, avg=22528.00, stdev=2896.31, samples=2 00:31:03.256 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:31:03.256 lat (usec) : 1000=0.06% 00:31:03.256 lat (msec) : 2=0.02%, 4=0.95%, 10=34.18%, 20=59.68%, 50=4.91% 00:31:03.256 lat (msec) : 100=0.19% 00:31:03.256 cpu : usr=4.09%, sys=6.49%, ctx=509, majf=0, minf=1 00:31:03.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:03.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:03.256 issued rwts: total=5344,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:03.256 job1: (groupid=0, jobs=1): err= 0: pid=2911116: Thu Nov 7 10:59:30 2024 00:31:03.256 read: IOPS=3915, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1008msec) 00:31:03.256 slat (nsec): min=1460, max=25732k, avg=120120.07, stdev=1136712.82 00:31:03.256 clat (usec): min=894, max=47415, avg=14854.80, stdev=6879.12 00:31:03.256 lat (usec): min=2701, max=47441, avg=14974.92, stdev=6966.24 00:31:03.256 clat percentiles (usec): 00:31:03.256 | 1.00th=[ 4228], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[10028], 00:31:03.256 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11600], 60.00th=[12780], 00:31:03.256 | 70.00th=[15664], 80.00th=[20841], 90.00th=[26608], 95.00th=[28967], 00:31:03.256 | 99.00th=[34341], 99.50th=[35914], 99.90th=[36963], 99.95th=[44303], 00:31:03.256 | 99.99th=[47449] 00:31:03.256 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:31:03.256 slat (usec): min=2, max=17860, avg=121.48, stdev=940.73 00:31:03.256 clat (usec): min=1524, max=83416, avg=16901.53, stdev=14230.70 00:31:03.256 lat (usec): min=1545, max=83426, avg=17023.01, stdev=14314.91 
00:31:03.256 clat percentiles (usec): 00:31:03.256 | 1.00th=[ 4555], 5.00th=[ 6849], 10.00th=[ 7898], 20.00th=[ 8979], 00:31:03.256 | 30.00th=[ 9765], 40.00th=[10814], 50.00th=[11994], 60.00th=[13698], 00:31:03.256 | 70.00th=[16188], 80.00th=[20579], 90.00th=[29492], 95.00th=[54789], 00:31:03.256 | 99.00th=[80217], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:31:03.256 | 99.99th=[83362] 00:31:03.256 bw ( KiB/s): min=15152, max=17616, per=21.67%, avg=16384.00, stdev=1742.31, samples=2 00:31:03.256 iops : min= 3788, max= 4404, avg=4096.00, stdev=435.58, samples=2 00:31:03.256 lat (usec) : 1000=0.01% 00:31:03.256 lat (msec) : 2=0.02%, 4=0.61%, 10=24.98%, 20=52.51%, 50=19.02% 00:31:03.256 lat (msec) : 100=2.85% 00:31:03.256 cpu : usr=3.87%, sys=3.67%, ctx=251, majf=0, minf=1 00:31:03.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:03.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:03.256 issued rwts: total=3947,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:03.256 job2: (groupid=0, jobs=1): err= 0: pid=2911133: Thu Nov 7 10:59:30 2024 00:31:03.256 read: IOPS=4814, BW=18.8MiB/s (19.7MB/s)(19.0MiB/1009msec) 00:31:03.256 slat (nsec): min=1496, max=25450k, avg=110360.51, stdev=910130.19 00:31:03.256 clat (usec): min=1105, max=56547, avg=14089.66, stdev=6866.29 00:31:03.256 lat (usec): min=3751, max=56563, avg=14200.02, stdev=6920.09 00:31:03.256 clat percentiles (usec): 00:31:03.256 | 1.00th=[ 6783], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10552], 00:31:03.256 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12518], 00:31:03.256 | 70.00th=[13435], 80.00th=[15926], 90.00th=[20317], 95.00th=[28705], 00:31:03.256 | 99.00th=[47973], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:31:03.256 | 99.99th=[56361] 00:31:03.256 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:31:03.256 slat (usec): min=2, max=9723, avg=85.09, stdev=502.92 00:31:03.256 clat (usec): min=1528, max=26149, avg=11588.45, stdev=2365.04 00:31:03.256 lat (usec): min=1541, max=26155, avg=11673.55, stdev=2400.78 00:31:03.256 clat percentiles (usec): 00:31:03.256 | 1.00th=[ 4228], 5.00th=[ 6980], 10.00th=[ 8848], 20.00th=[10290], 00:31:03.256 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11600], 60.00th=[11994], 00:31:03.256 | 70.00th=[12649], 80.00th=[13304], 90.00th=[13698], 95.00th=[15139], 00:31:03.256 | 99.00th=[17433], 99.50th=[18482], 99.90th=[20579], 99.95th=[21365], 00:31:03.256 | 99.99th=[26084] 00:31:03.256 bw ( KiB/s): min=18952, max=22008, per=27.08%, avg=20480.00, stdev=2160.92, samples=2 00:31:03.256 iops : min= 4738, max= 5502, avg=5120.00, stdev=540.23, samples=2 00:31:03.256 lat (msec) : 2=0.08%, 4=0.45%, 10=13.13%, 20=81.08%, 50=4.94% 00:31:03.256 lat (msec) : 100=0.32% 00:31:03.256 cpu : usr=3.17%, sys=6.85%, ctx=482, majf=0, minf=1 00:31:03.256 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:03.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.256 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:03.257 issued rwts: total=4858,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:03.257 job3: (groupid=0, jobs=1): err= 0: pid=2911138: Thu Nov 7 10:59:30 2024 00:31:03.257 read: 
IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:31:03.257 slat (nsec): min=1615, max=35385k, avg=99370.18, stdev=962227.31 00:31:03.257 clat (usec): min=2046, max=81576, avg=14803.05, stdev=8664.54 00:31:03.257 lat (usec): min=2052, max=90809, avg=14902.42, stdev=8751.00 00:31:03.257 clat percentiles (usec): 00:31:03.257 | 1.00th=[ 5211], 5.00th=[ 6783], 10.00th=[ 8586], 20.00th=[10159], 00:31:03.257 | 30.00th=[11207], 40.00th=[12518], 50.00th=[13042], 60.00th=[13698], 00:31:03.257 | 70.00th=[14746], 80.00th=[16909], 90.00th=[20579], 95.00th=[29492], 00:31:03.257 | 99.00th=[55313], 99.50th=[65799], 99.90th=[81265], 99.95th=[81265], 00:31:03.257 | 99.99th=[81265] 00:31:03.257 write: IOPS=4204, BW=16.4MiB/s (17.2MB/s)(16.6MiB/1010msec); 0 zone resets 00:31:03.257 slat (usec): min=2, max=20408, avg=112.41, stdev=1031.94 00:31:03.257 clat (usec): min=345, max=81597, avg=15709.89, stdev=11343.09 00:31:03.257 lat (usec): min=374, max=81606, avg=15822.30, stdev=11413.83 00:31:03.257 clat percentiles (usec): 00:31:03.257 | 1.00th=[ 1893], 5.00th=[ 4293], 10.00th=[ 6456], 20.00th=[ 7832], 00:31:03.257 | 30.00th=[10290], 40.00th=[12125], 50.00th=[13173], 60.00th=[16319], 00:31:03.257 | 70.00th=[17433], 80.00th=[20841], 90.00th=[24773], 95.00th=[32113], 00:31:03.257 | 99.00th=[77071], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:31:03.257 | 99.99th=[81265] 00:31:03.257 bw ( KiB/s): min=16384, max=16568, per=21.79%, avg=16476.00, stdev=130.11, samples=2 00:31:03.257 iops : min= 4096, max= 4142, avg=4119.00, stdev=32.53, samples=2 00:31:03.257 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.30% 00:31:03.257 lat (msec) : 2=0.50%, 4=1.81%, 10=21.29%, 20=58.55%, 50=15.70% 00:31:03.257 lat (msec) : 100=1.77% 00:31:03.257 cpu : usr=3.57%, sys=4.66%, ctx=218, majf=0, minf=1 00:31:03.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:03.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:03.257 issued rwts: total=4096,4247,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:03.257 00:31:03.257 Run status group 0 (all jobs): 00:31:03.257 READ: bw=70.6MiB/s (74.0MB/s), 15.3MiB/s-20.8MiB/s (16.0MB/s-21.8MB/s), io=71.3MiB (74.7MB), run=1003-1010msec 00:31:03.257 WRITE: bw=73.9MiB/s (77.4MB/s), 15.9MiB/s-21.9MiB/s (16.6MB/s-23.0MB/s), io=74.6MiB (78.2MB), run=1003-1010msec 00:31:03.257 00:31:03.257 Disk stats (read/write): 00:31:03.257 nvme0n1: ios=4650/4695, merge=0/0, ticks=42770/37015, in_queue=79785, util=94.59% 00:31:03.257 nvme0n2: ios=3072/3455, merge=0/0, ticks=44181/59870, in_queue=104051, util=86.69% 00:31:03.257 nvme0n3: ios=4153/4119, merge=0/0, ticks=41293/33730, in_queue=75023, util=97.91% 00:31:03.257 nvme0n4: ios=3584/4039, merge=0/0, ticks=40195/54338, in_queue=94533, util=89.70% 00:31:03.257 10:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:03.257 10:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2911240 00:31:03.257 10:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:03.257 10:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:03.257 [global] 00:31:03.257 thread=1 00:31:03.257 
invalidate=1 00:31:03.257 rw=read 00:31:03.257 time_based=1 00:31:03.257 runtime=10 00:31:03.257 ioengine=libaio 00:31:03.257 direct=1 00:31:03.257 bs=4096 00:31:03.257 iodepth=1 00:31:03.257 norandommap=1 00:31:03.257 numjobs=1 00:31:03.257 00:31:03.257 [job0] 00:31:03.257 filename=/dev/nvme0n1 00:31:03.257 [job1] 00:31:03.257 filename=/dev/nvme0n2 00:31:03.257 [job2] 00:31:03.257 filename=/dev/nvme0n3 00:31:03.257 [job3] 00:31:03.257 filename=/dev/nvme0n4 00:31:03.257 Could not set queue depth (nvme0n1) 00:31:03.257 Could not set queue depth (nvme0n2) 00:31:03.257 Could not set queue depth (nvme0n3) 00:31:03.257 Could not set queue depth (nvme0n4) 00:31:03.257 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:03.257 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:03.257 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:03.257 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:03.257 fio-3.35 00:31:03.257 Starting 4 threads 00:31:06.532 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:06.532 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=16666624, buflen=4096 00:31:06.532 fio: pid=2911569, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:06.532 10:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:06.532 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=36188160, buflen=4096 00:31:06.532 fio: pid=2911563, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:06.532 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:06.532 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:06.789 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=9097216, buflen=4096 00:31:06.789 fio: pid=2911534, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:06.789 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:06.789 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:07.046 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=48005120, buflen=4096 00:31:07.046 fio: pid=2911547, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:07.046 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:07.046 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:07.046 
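The RPC calls above are the hotplug half of the fio target test: fio is reading from the four namespaces with iodepth=1 while the backing bdevs are deleted out from under it, so the io_u "Operation not supported" errors that follow are the expected outcome rather than a failure. A minimal sketch of that flow, assuming a simplified rearrangement of what target/fio.sh does (only commands that appear in this log are used; error handling is omitted):

  # Start a 10-second read workload against the exported namespaces in the background.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!

  # Hot-remove the backing bdevs over RPC while the reads are still in flight.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_raid_delete concat0
  $rpc bdev_raid_delete raid0
  $rpc bdev_malloc_delete Malloc0
  $rpc bdev_malloc_delete Malloc1
  # ...the loop continues for the remaining Malloc bdevs (Malloc2 through Malloc6).

  # fio is expected to exit non-zero; the test treats that as the pass condition.
  wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'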
00:31:07.047 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2911534: Thu Nov 7 10:59:34 2024 00:31:07.047 read: IOPS=700, BW=2803KiB/s (2870kB/s)(8884KiB/3170msec) 00:31:07.047 slat (usec): min=6, max=31286, avg=34.77, stdev=765.28 00:31:07.047 clat (usec): min=180, max=42166, avg=1380.46, stdev=6672.74 00:31:07.047 lat (usec): min=189, max=42176, avg=1409.37, stdev=6710.80 00:31:07.047 clat percentiles (usec): 00:31:07.047 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 217], 00:31:07.047 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 239], 60.00th=[ 247], 00:31:07.047 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 322], 95.00th=[ 429], 00:31:07.047 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:07.047 | 99.99th=[42206] 00:31:07.047 bw ( KiB/s): min= 96, max=11720, per=7.59%, avg=2397.33, stdev=4648.46, samples=6 00:31:07.047 iops : min= 24, max= 2930, avg=599.33, stdev=1162.12, samples=6 00:31:07.047 lat (usec) : 250=62.78%, 500=34.16%, 750=0.09% 00:31:07.047 lat (msec) : 2=0.14%, 50=2.79% 00:31:07.047 cpu : usr=0.13%, sys=1.20%, ctx=2228, majf=0, minf=1 00:31:07.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.047 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.047 issued rwts: total=2222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:07.047 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2911547: Thu Nov 7 10:59:34 2024 00:31:07.047 read: IOPS=3445, BW=13.5MiB/s (14.1MB/s)(45.8MiB/3402msec) 00:31:07.047 slat (usec): min=5, max=15498, avg=11.95, stdev=222.20 00:31:07.047 clat (usec): min=166, max=42114, avg=275.21, stdev=1204.71 00:31:07.047 lat (usec): min=174, max=42137, avg=287.16, stdev=1225.86 00:31:07.047 clat percentiles (usec): 00:31:07.047 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 202], 20.00th=[ 217], 00:31:07.047 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:31:07.047 | 70.00th=[ 251], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 297], 00:31:07.047 | 99.00th=[ 347], 99.50th=[ 392], 99.90th=[ 7242], 99.95th=[41157], 00:31:07.047 | 99.99th=[42206] 00:31:07.047 bw ( KiB/s): min= 5296, max=16576, per=42.88%, avg=13533.50, stdev=4261.91, samples=6 00:31:07.047 iops : min= 1324, max= 4144, avg=3383.33, stdev=1065.48, samples=6 00:31:07.047 lat (usec) : 250=69.03%, 500=30.75%, 750=0.08%, 1000=0.01% 00:31:07.047 lat (msec) : 2=0.02%, 4=0.01%, 10=0.02%, 50=0.09% 00:31:07.047 cpu : usr=0.91%, sys=3.82%, ctx=11726, majf=0, minf=1 00:31:07.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.047 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.047 issued rwts: total=11721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:07.047 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2911563: Thu Nov 7 10:59:34 2024 00:31:07.047 read: IOPS=2987, BW=11.7MiB/s (12.2MB/s)(34.5MiB/2958msec) 00:31:07.047 slat (nsec): min=6094, max=31852, avg=7860.93, stdev=1217.48 00:31:07.047 clat (usec): min=170, max=42071, avg=323.21, stdev=1798.59 
00:31:07.047 lat (usec): min=177, max=42095, avg=331.06, stdev=1799.16 00:31:07.047 clat percentiles (usec): 00:31:07.047 | 1.00th=[ 194], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 225], 00:31:07.047 | 30.00th=[ 229], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:31:07.047 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 310], 00:31:07.047 | 99.00th=[ 441], 99.50th=[ 474], 99.90th=[41157], 99.95th=[41681], 00:31:07.047 | 99.99th=[42206] 00:31:07.047 bw ( KiB/s): min= 5160, max=16632, per=42.71%, avg=13480.00, stdev=4768.16, samples=5 00:31:07.047 iops : min= 1290, max= 4158, avg=3370.00, stdev=1192.04, samples=5 00:31:07.047 lat (usec) : 250=74.23%, 500=25.41%, 750=0.14% 00:31:07.047 lat (msec) : 2=0.01%, 4=0.01%, 50=0.19% 00:31:07.047 cpu : usr=0.68%, sys=3.01%, ctx=8837, majf=0, minf=2 00:31:07.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.047 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.047 issued rwts: total=8836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:07.047 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2911569: Thu Nov 7 10:59:34 2024 00:31:07.047 read: IOPS=1486, BW=5944KiB/s (6087kB/s)(15.9MiB/2738msec) 00:31:07.047 slat (nsec): min=2930, max=33184, avg=8200.22, stdev=2150.32 00:31:07.047 clat (usec): min=172, max=41110, avg=657.44, stdev=4023.66 00:31:07.047 lat (usec): min=179, max=41131, avg=665.64, stdev=4025.13 00:31:07.047 clat percentiles (usec): 00:31:07.047 | 1.00th=[ 186], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 227], 00:31:07.047 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:31:07.047 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 318], 95.00th=[ 383], 00:31:07.047 | 99.00th=[14877], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:07.047 | 99.99th=[41157] 00:31:07.047 bw ( KiB/s): min= 96, max=16288, per=19.46%, avg=6142.40, stdev=7612.12, samples=5 00:31:07.047 iops : min= 24, max= 4072, avg=1535.60, stdev=1903.03, samples=5 00:31:07.047 lat (usec) : 250=68.48%, 500=30.00%, 750=0.37% 00:31:07.047 lat (msec) : 2=0.07%, 4=0.05%, 20=0.02%, 50=0.98% 00:31:07.047 cpu : usr=0.47%, sys=1.50%, ctx=4073, majf=0, minf=2 00:31:07.047 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:07.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.047 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.047 issued rwts: total=4070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.047 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:07.047 00:31:07.047 Run status group 0 (all jobs): 00:31:07.047 READ: bw=30.8MiB/s (32.3MB/s), 2803KiB/s-13.5MiB/s (2870kB/s-14.1MB/s), io=105MiB (110MB), run=2738-3402msec 00:31:07.047 00:31:07.047 Disk stats (read/write): 00:31:07.047 nvme0n1: ios=2123/0, merge=0/0, ticks=3688/0, in_queue=3688, util=98.49% 00:31:07.047 nvme0n2: ios=11631/0, merge=0/0, ticks=4170/0, in_queue=4170, util=98.13% 00:31:07.047 nvme0n3: ios=8876/0, merge=0/0, ticks=3414/0, in_queue=3414, util=99.49% 00:31:07.047 nvme0n4: ios=4113/0, merge=0/0, ticks=3016/0, in_queue=3016, util=99.30% 00:31:07.047 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:31:07.047 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:07.304 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:07.304 10:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:07.561 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:07.561 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:07.819 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:07.819 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2911240 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:08.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:08.076 nvmf hotplug test: fio failed as expected 00:31:08.076 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:08.334 rmmod nvme_tcp 00:31:08.334 rmmod nvme_fabrics 00:31:08.334 rmmod nvme_keyring 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2908741 ']' 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2908741 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2908741 ']' 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2908741 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2908741 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2908741' 00:31:08.334 killing process with pid 2908741 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2908741 00:31:08.334 10:59:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2908741 00:31:08.592 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:08.592 10:59:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:08.592 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:08.592 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:08.592 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:08.592 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:08.592 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:08.592 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:08.592 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:08.592 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.592 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:08.592 10:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:11.122 00:31:11.122 real 0m25.255s 00:31:11.122 user 1m29.786s 00:31:11.122 sys 0m11.038s 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:11.122 ************************************ 00:31:11.122 END TEST nvmf_fio_target 00:31:11.122 ************************************ 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:11.122 ************************************ 00:31:11.122 START TEST nvmf_bdevio 00:31:11.122 ************************************ 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:11.122 * Looking for test storage... 
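The nvmf_bdevio test starting here goes through the same run_test wrapper that produced the START/END banners and the real/user/sys timing summary for nvmf_fio_target above. A rough sketch of that wrapping, assuming a deliberately simplified stand-in rather than the real helper in autotest_common.sh:

  # Hypothetical, simplified stand-in for the run_test helper (illustration only).
  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # e.g. .../test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }

In the trace above it is invoked as run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode.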
00:31:11.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:11.122 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:11.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.123 --rc genhtml_branch_coverage=1 00:31:11.123 --rc genhtml_function_coverage=1 00:31:11.123 --rc genhtml_legend=1 00:31:11.123 --rc geninfo_all_blocks=1 00:31:11.123 --rc geninfo_unexecuted_blocks=1 00:31:11.123 00:31:11.123 ' 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:11.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.123 --rc genhtml_branch_coverage=1 00:31:11.123 --rc genhtml_function_coverage=1 00:31:11.123 --rc genhtml_legend=1 00:31:11.123 --rc geninfo_all_blocks=1 00:31:11.123 --rc geninfo_unexecuted_blocks=1 00:31:11.123 00:31:11.123 ' 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:11.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.123 --rc genhtml_branch_coverage=1 00:31:11.123 --rc genhtml_function_coverage=1 00:31:11.123 --rc genhtml_legend=1 00:31:11.123 --rc geninfo_all_blocks=1 00:31:11.123 --rc geninfo_unexecuted_blocks=1 00:31:11.123 00:31:11.123 ' 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:11.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.123 --rc genhtml_branch_coverage=1 00:31:11.123 --rc genhtml_function_coverage=1 00:31:11.123 --rc genhtml_legend=1 00:31:11.123 --rc geninfo_all_blocks=1 00:31:11.123 --rc geninfo_unexecuted_blocks=1 00:31:11.123 00:31:11.123 ' 00:31:11.123 10:59:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.123 10:59:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:11.123 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:11.124 10:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:16.387 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:16.387 10:59:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:16.387 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.387 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:16.388 Found net devices under 0000:86:00.0: cvl_0_0 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:16.388 Found net devices under 0000:86:00.1: cvl_0_1 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:16.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:31:16.388 00:31:16.388 --- 10.0.0.2 ping statistics --- 00:31:16.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.388 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:16.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:31:16.388 00:31:16.388 --- 10.0.0.1 ping statistics --- 00:31:16.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.388 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:16.388 10:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:16.388 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:16.388 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:16.388 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:16.388 10:59:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:16.388 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2915812 00:31:16.388 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:16.388 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2915812 00:31:16.388 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2915812 ']' 00:31:16.388 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.388 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:16.388 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.388 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:16.388 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:16.646 [2024-11-07 10:59:44.061616] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:16.647 [2024-11-07 10:59:44.062576] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:31:16.647 [2024-11-07 10:59:44.062612] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.647 [2024-11-07 10:59:44.129552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.647 [2024-11-07 10:59:44.171448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.647 [2024-11-07 10:59:44.171486] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.647 [2024-11-07 10:59:44.171494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.647 [2024-11-07 10:59:44.171500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.647 [2024-11-07 10:59:44.171506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.647 [2024-11-07 10:59:44.173027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:16.647 [2024-11-07 10:59:44.173134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:16.647 [2024-11-07 10:59:44.173241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.647 [2024-11-07 10:59:44.173243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:16.647 [2024-11-07 10:59:44.239493] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
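The records above show nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace in interrupt mode and then waiting on its RPC socket before the bdevio steps continue. A minimal bash sketch of that launch-and-wait step, assuming the SPDK checkout path shown in this log; the polling loop is an illustrative stand-in for the real waitforlisten helper, which is not reproduced here:

    # Sketch only: start the target in the test namespace and poll its RPC socket.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from this log
    RPC_SOCK=/var/tmp/spdk.sock

    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!

    # Poll until the RPC server answers; the real waitforlisten also verifies the pid.
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done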
00:31:16.647 [2024-11-07 10:59:44.239987] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:16.647 [2024-11-07 10:59:44.240297] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:16.647 [2024-11-07 10:59:44.240664] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:16.647 [2024-11-07 10:59:44.240711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:16.647 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:16.647 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:31:16.647 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:16.647 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:16.647 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:16.647 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.647 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:16.647 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.647 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:16.647 [2024-11-07 10:59:44.305985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:16.905 Malloc0 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.905 10:59:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:16.905 [2024-11-07 10:59:44.370015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:16.905 { 00:31:16.905 "params": { 00:31:16.905 "name": "Nvme$subsystem", 00:31:16.905 "trtype": "$TEST_TRANSPORT", 00:31:16.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.905 "adrfam": "ipv4", 00:31:16.905 "trsvcid": "$NVMF_PORT", 00:31:16.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.905 "hdgst": ${hdgst:-false}, 00:31:16.905 "ddgst": ${ddgst:-false} 00:31:16.905 }, 00:31:16.905 "method": "bdev_nvme_attach_controller" 00:31:16.905 } 00:31:16.905 EOF 00:31:16.905 )") 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:16.905 10:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:16.905 "params": { 00:31:16.905 "name": "Nvme1", 00:31:16.905 "trtype": "tcp", 00:31:16.905 "traddr": "10.0.0.2", 00:31:16.905 "adrfam": "ipv4", 00:31:16.905 "trsvcid": "4420", 00:31:16.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:16.905 "hdgst": false, 00:31:16.905 "ddgst": false 00:31:16.905 }, 00:31:16.905 "method": "bdev_nvme_attach_controller" 00:31:16.905 }' 00:31:16.906 [2024-11-07 10:59:44.421032] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:31:16.906 [2024-11-07 10:59:44.421079] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915838 ] 00:31:16.906 [2024-11-07 10:59:44.485663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:16.906 [2024-11-07 10:59:44.529418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.906 [2024-11-07 10:59:44.529522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.906 [2024-11-07 10:59:44.529525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.163 I/O targets: 00:31:17.163 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:17.163 00:31:17.163 00:31:17.163 CUnit - A unit testing framework for C - Version 2.1-3 00:31:17.163 http://cunit.sourceforge.net/ 00:31:17.163 00:31:17.163 00:31:17.163 Suite: bdevio tests on: Nvme1n1 00:31:17.420 Test: blockdev write read block ...passed 00:31:17.420 Test: blockdev write zeroes read block ...passed 00:31:17.420 Test: blockdev write zeroes read no split ...passed 00:31:17.421 Test: blockdev write zeroes read split ...passed 00:31:17.421 Test: blockdev write zeroes read split partial ...passed 00:31:17.421 Test: blockdev reset ...[2024-11-07 10:59:44.944638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:17.421 [2024-11-07 10:59:44.944704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199f350 (9): Bad file descriptor 00:31:17.421 [2024-11-07 10:59:45.037353] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:31:17.421 passed 00:31:17.421 Test: blockdev write read 8 blocks ...passed 00:31:17.421 Test: blockdev write read size > 128k ...passed 00:31:17.421 Test: blockdev write read invalid size ...passed 00:31:17.678 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:17.678 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:17.678 Test: blockdev write read max offset ...passed 00:31:17.678 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:17.678 Test: blockdev writev readv 8 blocks ...passed 00:31:17.678 Test: blockdev writev readv 30 x 1block ...passed 00:31:17.678 Test: blockdev writev readv block ...passed 00:31:17.678 Test: blockdev writev readv size > 128k ...passed 00:31:17.678 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:17.678 Test: blockdev comparev and writev ...[2024-11-07 10:59:45.247422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:17.678 [2024-11-07 10:59:45.247457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:17.678 [2024-11-07 10:59:45.247471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:17.678 [2024-11-07 10:59:45.247480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:17.678 [2024-11-07 10:59:45.247773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:17.678 [2024-11-07 10:59:45.247785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:17.678 [2024-11-07 10:59:45.247797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:17.678 [2024-11-07 10:59:45.247805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:17.678 [2024-11-07 10:59:45.248103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:17.678 [2024-11-07 10:59:45.248115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:17.678 [2024-11-07 10:59:45.248126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:17.678 [2024-11-07 10:59:45.248135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:17.678 [2024-11-07 10:59:45.248421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:17.678 [2024-11-07 10:59:45.248437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:17.678 [2024-11-07 10:59:45.248449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:17.678 [2024-11-07 10:59:45.248457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:17.678 passed 00:31:17.678 Test: blockdev nvme passthru rw ...passed 00:31:17.678 Test: blockdev nvme passthru vendor specific ...[2024-11-07 10:59:45.330895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:17.678 [2024-11-07 10:59:45.330913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:17.678 [2024-11-07 10:59:45.331031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:17.678 [2024-11-07 10:59:45.331042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:17.678 [2024-11-07 10:59:45.331152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:17.678 [2024-11-07 10:59:45.331162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:17.678 [2024-11-07 10:59:45.331272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:17.678 [2024-11-07 10:59:45.331283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:17.678 passed 00:31:17.936 Test: blockdev nvme admin passthru ...passed 00:31:17.936 Test: blockdev copy ...passed 00:31:17.936 00:31:17.936 Run Summary: Type Total Ran Passed Failed Inactive 00:31:17.936 suites 1 1 n/a 0 0 00:31:17.936 tests 23 23 23 0 0 00:31:17.936 asserts 152 152 152 0 n/a 00:31:17.936 00:31:17.936 Elapsed time = 1.175 seconds 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:17.936 rmmod nvme_tcp 00:31:17.936 rmmod nvme_fabrics 00:31:17.936 rmmod nvme_keyring 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
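Once the bdevio suite reports its run summary, the EXIT trap runs nvmftestfini: the initiator-side NVMe modules are unloaded, the target process is killed, and the namespace plumbing created during setup is torn down. A rough sketch of that teardown, assuming the interface and namespace names used in this run; the real common.sh helpers add retries and shared-memory bookkeeping not shown here:

    # Sketch only: teardown mirroring the nvmftestfini records around this point.
    modprobe -v -r nvme-tcp        # unload initiator-side modules, as logged above
    modprobe -v -r nvme-fabrics

    kill "$nvmfpid" 2>/dev/null && wait "$nvmfpid" 2>/dev/null   # stop nvmf_tgt

    # Keep every iptables rule except the ones tagged SPDK_NVMF during setup,
    # then remove the target namespace and flush the initiator-side address.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1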
00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2915812 ']' 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2915812 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2915812 ']' 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2915812 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:17.936 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2915812 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2915812' 00:31:18.195 killing process with pid 2915812 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2915812 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2915812 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.195 10:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.728 10:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:20.728 00:31:20.728 real 0m9.619s 00:31:20.728 user 
0m9.252s 00:31:20.728 sys 0m4.872s 00:31:20.728 10:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:20.728 10:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:20.728 ************************************ 00:31:20.728 END TEST nvmf_bdevio 00:31:20.728 ************************************ 00:31:20.728 10:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:20.728 00:31:20.728 real 4m23.837s 00:31:20.728 user 9m2.760s 00:31:20.728 sys 1m45.419s 00:31:20.728 10:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:20.728 10:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:20.728 ************************************ 00:31:20.728 END TEST nvmf_target_core_interrupt_mode 00:31:20.728 ************************************ 00:31:20.728 10:59:47 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:20.728 10:59:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:20.728 10:59:47 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:20.728 10:59:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:20.728 ************************************ 00:31:20.728 START TEST nvmf_interrupt 00:31:20.728 ************************************ 00:31:20.728 10:59:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:20.728 * Looking for test storage... 
00:31:20.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:20.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.728 --rc genhtml_branch_coverage=1 00:31:20.728 --rc genhtml_function_coverage=1 00:31:20.728 --rc genhtml_legend=1 00:31:20.728 --rc geninfo_all_blocks=1 00:31:20.728 --rc geninfo_unexecuted_blocks=1 00:31:20.728 00:31:20.728 ' 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:20.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.728 --rc genhtml_branch_coverage=1 00:31:20.728 --rc genhtml_function_coverage=1 00:31:20.728 --rc genhtml_legend=1 00:31:20.728 --rc geninfo_all_blocks=1 00:31:20.728 --rc geninfo_unexecuted_blocks=1 00:31:20.728 00:31:20.728 ' 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:20.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.728 --rc genhtml_branch_coverage=1 00:31:20.728 --rc genhtml_function_coverage=1 00:31:20.728 --rc genhtml_legend=1 00:31:20.728 --rc geninfo_all_blocks=1 00:31:20.728 --rc geninfo_unexecuted_blocks=1 00:31:20.728 00:31:20.728 ' 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:20.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.728 --rc genhtml_branch_coverage=1 00:31:20.728 --rc genhtml_function_coverage=1 00:31:20.728 --rc genhtml_legend=1 00:31:20.728 --rc geninfo_all_blocks=1 00:31:20.728 --rc geninfo_unexecuted_blocks=1 00:31:20.728 00:31:20.728 ' 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.728 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:20.729 10:59:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:25.989 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:25.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.990 10:59:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:25.990 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:25.990 Found net devices under 0000:86:00.0: cvl_0_0 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:25.990 Found net devices under 0000:86:00.1: cvl_0_1 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:25.990 10:59:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:25.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:31:25.990 00:31:25.990 --- 10.0.0.2 ping statistics --- 00:31:25.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.990 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:25.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:25.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:31:25.990 00:31:25.990 --- 10.0.0.1 ping statistics --- 00:31:25.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.990 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2919381 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2919381 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 2919381 ']' 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:25.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:25.990 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:25.990 [2024-11-07 10:59:53.647998] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:25.990 [2024-11-07 10:59:53.648973] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:31:25.990 [2024-11-07 10:59:53.649011] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.248 [2024-11-07 10:59:53.717227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:26.248 [2024-11-07 10:59:53.758584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:26.248 [2024-11-07 10:59:53.758622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.248 [2024-11-07 10:59:53.758630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.248 [2024-11-07 10:59:53.758636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.248 [2024-11-07 10:59:53.758641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:26.248 [2024-11-07 10:59:53.759825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.248 [2024-11-07 10:59:53.759829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.248 [2024-11-07 10:59:53.826669] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:26.248 [2024-11-07 10:59:53.826909] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:26.248 [2024-11-07 10:59:53.826959] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:26.248 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:26.248 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:31:26.248 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:26.248 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:26.248 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:26.248 10:59:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:26.248 10:59:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:31:26.248 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:31:26.248 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:31:26.248 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:31:26.248 5000+0 records in 00:31:26.248 5000+0 records out 00:31:26.248 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0171645 s, 597 MB/s 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:26.506 AIO0 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:26.506 [2024-11-07 10:59:53.960314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.506 10:59:53 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:26.506 [2024-11-07 10:59:53.988572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2919381 0 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2919381 0 idle 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2919381 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2919381 -w 256 00:31:26.506 10:59:53 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:26.506 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2919381 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.23 reactor_0' 00:31:26.506 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2919381 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.23 reactor_0 00:31:26.506 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:26.506 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2919381 1 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2919381 1 idle 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2919381 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2919381 -w 256 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2919404 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2919404 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2919640 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
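
The reactor_is_busy_or_idle checks traced above poll top in batch mode for the nvmf_tgt PID, pick out the reactor_N thread of that process, read its %CPU from column 9, and compare it against the busy/idle thresholds (65/30 for the idle checks here, 30/30 once BUSY_THRESHOLD=30 is set for the perf phase). A condensed bash sketch of that flow, reconstructed from this xtrace output rather than copied from the real interrupt/common.sh helper, could look like:

    # Sketch only: reconstructed from the trace above, not the actual helper.
    reactor_cpu_rate() {
        local pid=$1 idx=$2
        # One batch-mode top iteration, threads of the target process only;
        # column 9 of the matching reactor_<idx> line is %CPU.
        top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" \
            | sed -e 's/^\s*//g' | awk '{print $9}'
    }

    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3              # state is "busy" or "idle"
        local busy_threshold=${BUSY_THRESHOLD:-65}  # 65 in the idle checks, 30 for the perf phase
        local idle_threshold=30
        local j rate
        for ((j = 10; j != 0; j--)); do           # the trace shows j starting at 10 and decrementing
            rate=$(reactor_cpu_rate "$pid" "$idx")
            rate=${rate%.*}                        # truncate: 99.9 -> 99, 0.0 -> 0, as in the trace
            if [[ $state == busy ]]; then
                (( rate >= busy_threshold )) && return 0
            else
                (( rate <= idle_threshold )) && return 0
            fi
            sleep 1                                # retry until the reactor settles into the state
        done
        return 1
    }

Truncating the fractional %CPU and comparing with plain bash integer arithmetic is why the trace shows cpu_rate=99.9 being reduced to cpu_rate=99 and cpu_rate=0.0 to cpu_rate=0 before the threshold comparison.
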
00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2919381 0 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2919381 0 busy 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2919381 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2919381 -w 256 00:31:26.764 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:27.022 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2919381 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:00.23 reactor_0' 00:31:27.022 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2919381 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:00.23 reactor_0 00:31:27.022 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:27.022 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:27.022 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:27.022 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:27.022 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:27.022 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:27.022 10:59:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:31:27.954 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:31:27.954 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:27.954 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2919381 -w 256 00:31:27.954 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2919381 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.58 reactor_0' 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2919381 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.58 reactor_0 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2919381 1 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2919381 1 busy 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2919381 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2919381 -w 256 00:31:28.212 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:28.470 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2919404 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.35 reactor_1' 00:31:28.470 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2919404 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.35 reactor_1 00:31:28.470 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:28.470 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:28.470 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:31:28.470 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:31:28.470 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:31:28.470 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:31:28.470 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:31:28.470 10:59:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:28.470 10:59:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2919640 00:31:38.429 Initializing NVMe Controllers 00:31:38.429 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:38.429 Controller IO queue size 256, less than required. 00:31:38.429 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:38.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:38.429 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:38.429 Initialization complete. Launching workers. 
00:31:38.429 ======================================================== 00:31:38.429 Latency(us) 00:31:38.429 Device Information : IOPS MiB/s Average min max 00:31:38.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16262.10 63.52 15750.59 2777.61 19837.07 00:31:38.429 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16110.70 62.93 15897.08 4319.90 21078.61 00:31:38.429 ======================================================== 00:31:38.429 Total : 32372.80 126.46 15823.50 2777.61 21078.61 00:31:38.429 00:31:38.429 [2024-11-07 11:00:04.585774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3b00 is same with the state(6) to be set 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2919381 0 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2919381 0 idle 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2919381 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2919381 -w 256 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2919381 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.21 reactor_0' 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2919381 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.21 reactor_0 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:38.429 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2919381 1 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2919381 1 idle 
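
In the spdk_nvme_perf summary above there is one row per I/O core plus a Total row: the Total IOPS is the sum of the per-core IOPS, and the Total average latency is the IOPS-weighted mean of the per-core averages. A throwaway awk check, with the numbers copied from this run and shown purely for illustration, reproduces the Total row:

    awk 'BEGIN {
        iops2 = 16262.10; lat2 = 15750.59;   # "from core 2" row (IOPS, avg latency in us)
        iops3 = 16110.70; lat3 = 15897.08;   # "from core 3" row
        total = iops2 + iops3;                               # 32372.80 IOPS
        avg   = (iops2 * lat2 + iops3 * lat3) / total;       # ~15823.5 us
        printf "Total: %.2f IOPS, %.2f us average latency\n", total, avg;
    }'

Run on these figures it prints 32372.80 IOPS and about 15823.49 us; the last digit differs from the reported 15823.50 only because the per-row values are themselves already rounded.
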
00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2919381 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2919381 -w 256 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2919404 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2919404 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:38.430 11:00:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:38.430 11:00:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:31:38.430 11:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:31:38.430 11:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:31:38.430 11:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:31:38.430 11:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter 
)) 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2919381 0 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2919381 0 idle 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2919381 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2919381 -w 256 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2919381 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.36 reactor_0' 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2919381 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.36 reactor_0 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2919381 1 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2919381 1 idle 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2919381 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2919381 -w 256 00:31:39.803 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:40.061 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2919404 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.05 reactor_1' 00:31:40.061 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2919404 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.05 reactor_1 00:31:40.061 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:40.061 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:40.061 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:40.061 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:40.061 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:40.061 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:40.061 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:40.061 11:00:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:40.061 11:00:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:40.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:40.322 rmmod nvme_tcp 00:31:40.322 rmmod nvme_fabrics 00:31:40.322 rmmod nvme_keyring 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:40.322 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:31:40.322 11:00:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:31:40.323 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2919381 ']' 00:31:40.323 11:00:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2919381 00:31:40.323 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 2919381 ']' 00:31:40.323 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 2919381 00:31:40.323 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:31:40.323 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:40.323 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2919381 00:31:40.637 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:40.637 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:40.637 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2919381' 00:31:40.637 killing process with pid 2919381 00:31:40.637 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 2919381 00:31:40.637 11:00:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 2919381 00:31:40.637 11:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:40.637 11:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:40.637 11:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:40.638 11:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:31:40.638 11:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:31:40.638 11:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:40.638 11:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:31:40.638 11:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:40.638 11:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:40.638 11:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.638 11:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:40.638 11:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.589 11:00:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:42.846 00:31:42.846 real 0m22.261s 00:31:42.846 user 0m39.400s 00:31:42.846 sys 0m8.055s 00:31:42.846 11:00:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:42.846 11:00:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:42.846 ************************************ 00:31:42.846 END TEST nvmf_interrupt 00:31:42.847 ************************************ 00:31:42.847 00:31:42.847 real 26m37.117s 00:31:42.847 user 55m42.929s 00:31:42.847 sys 8m55.145s 00:31:42.847 11:00:10 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:42.847 11:00:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:42.847 ************************************ 00:31:42.847 END TEST nvmf_tcp 00:31:42.847 ************************************ 00:31:42.847 11:00:10 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:31:42.847 11:00:10 -- spdk/autotest.sh@282 -- # run_test 
spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:42.847 11:00:10 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:42.847 11:00:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:42.847 11:00:10 -- common/autotest_common.sh@10 -- # set +x 00:31:42.847 ************************************ 00:31:42.847 START TEST spdkcli_nvmf_tcp 00:31:42.847 ************************************ 00:31:42.847 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:42.847 * Looking for test storage... 00:31:42.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:42.847 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:42.847 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:31:42.847 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:43.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.105 --rc genhtml_branch_coverage=1 00:31:43.105 --rc genhtml_function_coverage=1 00:31:43.105 --rc genhtml_legend=1 00:31:43.105 --rc geninfo_all_blocks=1 00:31:43.105 --rc geninfo_unexecuted_blocks=1 00:31:43.105 00:31:43.105 ' 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:43.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.105 --rc genhtml_branch_coverage=1 00:31:43.105 --rc genhtml_function_coverage=1 00:31:43.105 --rc genhtml_legend=1 00:31:43.105 --rc geninfo_all_blocks=1 00:31:43.105 --rc geninfo_unexecuted_blocks=1 00:31:43.105 00:31:43.105 ' 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:43.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.105 --rc genhtml_branch_coverage=1 00:31:43.105 --rc genhtml_function_coverage=1 00:31:43.105 --rc genhtml_legend=1 00:31:43.105 --rc geninfo_all_blocks=1 00:31:43.105 --rc geninfo_unexecuted_blocks=1 00:31:43.105 00:31:43.105 ' 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:43.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:43.105 --rc genhtml_branch_coverage=1 00:31:43.105 --rc genhtml_function_coverage=1 00:31:43.105 --rc genhtml_legend=1 00:31:43.105 --rc geninfo_all_blocks=1 00:31:43.105 --rc geninfo_unexecuted_blocks=1 00:31:43.105 00:31:43.105 ' 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:43.105 
11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:43.105 11:00:10 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.105 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:43.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2922857 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2922857 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 2922857 ']' 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:43.106 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:43.106 [2024-11-07 11:00:10.626239] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
00:31:43.106 [2024-11-07 11:00:10.626284] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922857 ] 00:31:43.106 [2024-11-07 11:00:10.688555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:43.106 [2024-11-07 11:00:10.730894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.106 [2024-11-07 11:00:10.730897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.363 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:43.363 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:31:43.363 11:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:43.363 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:43.363 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:43.363 11:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:43.363 11:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:43.363 11:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:43.363 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:43.363 11:00:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:43.363 11:00:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:43.363 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:43.363 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:43.363 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:43.363 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:43.363 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:43.363 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:43.363 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:43.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:43.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:43.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:43.363 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:43.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:43.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:43.363 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:43.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:43.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:31:43.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:43.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:43.363 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:43.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:43.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:43.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:43.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:43.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:43.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:43.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:43.364 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:43.364 ' 00:31:45.882 [2024-11-07 11:00:13.530038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:47.250 [2024-11-07 11:00:14.870652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:49.772 [2024-11-07 11:00:17.358171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:52.296 [2024-11-07 11:00:19.528973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:53.669 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:53.669 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:53.669 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:53.669 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:53.669 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:53.669 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:53.669 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:53.669 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:53.669 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:53.669 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:53.669 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:53.669 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:53.669 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:53.669 11:00:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:53.669 11:00:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:53.669 11:00:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:53.669 11:00:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:53.669 11:00:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:53.669 11:00:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:53.669 11:00:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:53.669 11:00:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:54.234 11:00:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:54.234 11:00:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:54.234 11:00:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:54.234 11:00:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:54.234 11:00:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:54.234 
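
Note on the create-and-verify flow traced above: spdkcli_job.py feeds (command, expected-output, expect-success) triples to the CLI, which is what the "Executing command: [...]" lines echo back, and check_match then diffs the full "ll /nvmf" tree against a stored .match file. A minimal sketch of the same commands driven directly through scripts/spdkcli.py follows; the $SPDK checkout path and the default RPC socket are assumptions, as is the (observed above with "ll /nvmf") ability of spdkcli.py to run a single command passed on its command line against an already-running nvmf_tgt.

# Minimal sketch (not from the captured run); assumes a running nvmf_tgt on the
# default RPC socket and an SPDK checkout at $SPDK.
SPDK=./spdk
$SPDK/scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
$SPDK/scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
$SPDK/scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
$SPDK/scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
$SPDK/scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
$SPDK/scripts/spdkcli.py ll /nvmf    # the tree listing that check_match compares with the .match file
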
11:00:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:54.234 11:00:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:54.234 11:00:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:54.234 11:00:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:54.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:54.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:54.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:54.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:54.234 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:54.234 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:54.234 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:54.234 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:54.234 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:54.234 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:54.234 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:54.234 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:54.234 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:54.234 ' 00:31:59.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:59.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:59.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:59.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:59.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:59.495 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:59.495 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:59.495 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:59.495 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:59.495 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:59.495 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:59.495 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:59.495 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:59.495 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:59.495 11:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:59.495 11:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:59.495 11:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:59.495 
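
The teardown job above undoes the configuration in roughly the reverse order of creation: namespaces and allowed hosts first, then listen addresses, then the subsystems, and only then the malloc bdevs, which keeps each bdev in place until nothing exports it. A condensed sketch of that delete order, again via scripts/spdkcli.py with a hypothetical $SPDK checkout path and the default RPC socket assumed:

# Minimal sketch (not from the captured run); same delete ordering as the job above.
SPDK=./spdk
$SPDK/scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1
$SPDK/scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2
$SPDK/scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all
$SPDK/scripts/spdkcli.py /nvmf/subsystem delete_all
$SPDK/scripts/spdkcli.py /bdevs/malloc delete Malloc1
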
11:00:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2922857 00:31:59.495 11:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 2922857 ']' 00:31:59.495 11:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 2922857 00:31:59.495 11:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:31:59.495 11:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:59.495 11:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2922857 00:31:59.495 11:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:59.495 11:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:59.495 11:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2922857' 00:31:59.495 killing process with pid 2922857 00:31:59.495 11:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 2922857 00:31:59.495 11:00:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 2922857 00:31:59.495 11:00:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:59.495 11:00:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:59.495 11:00:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2922857 ']' 00:31:59.495 11:00:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2922857 00:31:59.495 11:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 2922857 ']' 00:31:59.495 11:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 2922857 00:31:59.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2922857) - No such process 00:31:59.495 11:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 2922857 is not found' 00:31:59.495 Process with pid 2922857 is not found 00:31:59.495 11:00:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:59.495 11:00:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:59.495 11:00:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:59.495 00:31:59.495 real 0m16.696s 00:31:59.495 user 0m36.004s 00:31:59.495 sys 0m0.769s 00:31:59.495 11:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:59.495 11:00:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:59.495 ************************************ 00:31:59.495 END TEST spdkcli_nvmf_tcp 00:31:59.495 ************************************ 00:31:59.495 11:00:27 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:59.495 11:00:27 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:59.495 11:00:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:59.495 11:00:27 -- common/autotest_common.sh@10 -- # set +x 00:31:59.495 ************************************ 00:31:59.495 START TEST nvmf_identify_passthru 00:31:59.495 ************************************ 00:31:59.495 11:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:59.754 * Looking for test 
storage... 00:31:59.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:59.754 11:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:59.754 11:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:31:59.754 11:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:59.754 11:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:59.754 11:00:27 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:59.755 11:00:27 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:59.755 11:00:27 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:59.755 11:00:27 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:59.755 11:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:59.755 11:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:59.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.755 --rc genhtml_branch_coverage=1 00:31:59.755 --rc genhtml_function_coverage=1 00:31:59.755 --rc genhtml_legend=1 00:31:59.755 --rc geninfo_all_blocks=1 00:31:59.755 --rc geninfo_unexecuted_blocks=1 00:31:59.755 00:31:59.755 ' 00:31:59.755 11:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:59.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.755 --rc genhtml_branch_coverage=1 00:31:59.755 --rc genhtml_function_coverage=1 00:31:59.755 --rc genhtml_legend=1 00:31:59.755 --rc geninfo_all_blocks=1 00:31:59.755 --rc geninfo_unexecuted_blocks=1 00:31:59.755 00:31:59.755 ' 00:31:59.755 11:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:59.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.755 --rc genhtml_branch_coverage=1 00:31:59.755 --rc genhtml_function_coverage=1 00:31:59.755 --rc genhtml_legend=1 00:31:59.755 --rc geninfo_all_blocks=1 00:31:59.755 --rc geninfo_unexecuted_blocks=1 00:31:59.755 00:31:59.755 ' 00:31:59.755 11:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:59.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.755 --rc genhtml_branch_coverage=1 00:31:59.755 --rc genhtml_function_coverage=1 00:31:59.755 --rc genhtml_legend=1 00:31:59.755 --rc geninfo_all_blocks=1 00:31:59.755 --rc geninfo_unexecuted_blocks=1 00:31:59.755 00:31:59.755 ' 00:31:59.755 11:00:27 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.755 11:00:27 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:59.755 11:00:27 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.755 11:00:27 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.755 11:00:27 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.755 11:00:27 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.755 11:00:27 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.755 11:00:27 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.755 11:00:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:59.755 11:00:27 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:59.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:59.755 11:00:27 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.755 11:00:27 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:59.755 11:00:27 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.755 11:00:27 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.755 11:00:27 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.755 11:00:27 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.755 11:00:27 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.755 11:00:27 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.755 11:00:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:59.755 11:00:27 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.755 11:00:27 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.755 11:00:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:59.755 11:00:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:59.755 11:00:27 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:31:59.755 11:00:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:05.015 11:00:32 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:05.015 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:05.015 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:05.015 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:05.016 Found net devices under 0000:86:00.0: cvl_0_0 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:05.016 Found net devices under 0000:86:00.1: cvl_0_1 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:05.016 11:00:32 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.016 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:05.274 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:05.274 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:05.274 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:05.274 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:05.274 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:05.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:05.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:32:05.274 00:32:05.274 --- 10.0.0.2 ping statistics --- 00:32:05.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.274 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:32:05.274 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:05.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
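
The nvmf_tcp_init steps traced here put the target-side E810 port into its own network namespace and address the two ports as 10.0.0.1 (initiator, host side) and 10.0.0.2 (target, inside cvl_0_0_ns_spdk), then verify connectivity with ping in both directions. A condensed sketch of that setup follows; the cvl_0_0/cvl_0_1 names are this machine's E810 ports, so substitute your own interfaces on other systems, and the commands must run as root.

# Minimal sketch (not from the captured run) of the namespace split performed above.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS                              # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address on the host side
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0      # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on the default port
ping -c 1 10.0.0.2                                         # host -> target namespace
ip netns exec $NS ping -c 1 10.0.0.1                       # target namespace -> host
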
00:32:05.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:32:05.274 00:32:05.274 --- 10.0.0.1 ping statistics --- 00:32:05.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:05.274 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:32:05.274 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:05.275 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:05.275 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:05.275 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:05.275 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:05.275 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:05.275 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:05.275 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:05.275 11:00:32 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:05.275 11:00:32 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:05.275 11:00:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:32:05.275 11:00:32 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:32:05.275 11:00:32 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:05.275 11:00:32 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:05.275 11:00:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:05.275 11:00:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:05.275 11:00:32 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:09.457 11:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:32:09.457 11:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:09.457 11:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:09.457 11:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:13.635 11:00:41 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:13.635 11:00:41 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:13.635 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:13.635 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:13.635 11:00:41 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:13.635 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:13.635 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:13.635 11:00:41 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2929900 00:32:13.636 11:00:41 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:13.636 11:00:41 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:13.636 11:00:41 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2929900 00:32:13.636 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 2929900 ']' 00:32:13.636 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.636 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:13.636 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:13.636 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:13.636 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:13.636 [2024-11-07 11:00:41.192175] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:32:13.636 [2024-11-07 11:00:41.192228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.636 [2024-11-07 11:00:41.258240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:13.636 [2024-11-07 11:00:41.301383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.636 [2024-11-07 11:00:41.301420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
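
At this point the test has recorded the local controller's serial and model strings over PCIe (spdk_nvme_identify against 0000:5e:00.0) and launched nvmf_tgt inside the target namespace with --wait-for-rpc, so the passthru-identify setting can be applied before framework initialization completes. A minimal sketch of those steps follows; $SPDK stands in for the checkout path used in the trace, and the sleep is a crude substitute for the harness's waitforlisten helper.

# Minimal sketch (not from the captured run): baseline identify over PCIe, then
# the target launch and the RPCs that the trace below issues via rpc_cmd.
SPDK=./spdk
bdf=0000:5e:00.0                                   # first NVMe BDF, as gen_nvme.sh | jq reported above
pcie_sn=$($SPDK/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
          | grep 'Serial Number:' | awk '{print $3}')
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
sleep 2                                            # crude wait for the RPC socket to appear
$SPDK/scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
$SPDK/scripts/rpc.py framework_start_init

The remainder of the trace repeats the identify against the NVMe/TCP listener (traddr 10.0.0.2, trsvcid 4420) and compares the returned serial and model strings with the PCIe values, which is the behavior the passthru-identify configuration is meant to provide.
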
00:32:13.636 [2024-11-07 11:00:41.301428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.636 [2024-11-07 11:00:41.301437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.636 [2024-11-07 11:00:41.301443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:13.636 [2024-11-07 11:00:41.302930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:13.636 [2024-11-07 11:00:41.302951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:13.893 [2024-11-07 11:00:41.302978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:13.893 [2024-11-07 11:00:41.302980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:32:13.893 11:00:41 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:13.893 INFO: Log level set to 20 00:32:13.893 INFO: Requests: 00:32:13.893 { 00:32:13.893 "jsonrpc": "2.0", 00:32:13.893 "method": "nvmf_set_config", 00:32:13.893 "id": 1, 00:32:13.893 "params": { 00:32:13.893 "admin_cmd_passthru": { 00:32:13.893 "identify_ctrlr": true 00:32:13.893 } 00:32:13.893 } 00:32:13.893 } 00:32:13.893 00:32:13.893 INFO: response: 00:32:13.893 { 00:32:13.893 "jsonrpc": "2.0", 00:32:13.893 "id": 1, 00:32:13.893 "result": true 00:32:13.893 } 00:32:13.893 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.893 11:00:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:13.893 INFO: Setting log level to 20 00:32:13.893 INFO: Setting log level to 20 00:32:13.893 INFO: Log level set to 20 00:32:13.893 INFO: Log level set to 20 00:32:13.893 INFO: Requests: 00:32:13.893 { 00:32:13.893 "jsonrpc": "2.0", 00:32:13.893 "method": "framework_start_init", 00:32:13.893 "id": 1 00:32:13.893 } 00:32:13.893 00:32:13.893 INFO: Requests: 00:32:13.893 { 00:32:13.893 "jsonrpc": "2.0", 00:32:13.893 "method": "framework_start_init", 00:32:13.893 "id": 1 00:32:13.893 } 00:32:13.893 00:32:13.893 [2024-11-07 11:00:41.422127] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:13.893 INFO: response: 00:32:13.893 { 00:32:13.893 "jsonrpc": "2.0", 00:32:13.893 "id": 1, 00:32:13.893 "result": true 00:32:13.893 } 00:32:13.893 00:32:13.893 INFO: response: 00:32:13.893 { 00:32:13.893 "jsonrpc": "2.0", 00:32:13.893 "id": 1, 00:32:13.893 "result": true 00:32:13.893 } 00:32:13.893 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.893 11:00:41 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.893 11:00:41 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:32:13.893 INFO: Setting log level to 40 00:32:13.893 INFO: Setting log level to 40 00:32:13.893 INFO: Setting log level to 40 00:32:13.893 [2024-11-07 11:00:41.435466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.893 11:00:41 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:13.893 11:00:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.893 11:00:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:17.167 Nvme0n1 00:32:17.167 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.167 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:17.167 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.167 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:17.168 [2024-11-07 11:00:44.341835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:17.168 [ 00:32:17.168 { 00:32:17.168 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:17.168 "subtype": "Discovery", 00:32:17.168 "listen_addresses": [], 00:32:17.168 "allow_any_host": true, 00:32:17.168 "hosts": [] 00:32:17.168 }, 00:32:17.168 { 00:32:17.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:17.168 "subtype": "NVMe", 00:32:17.168 "listen_addresses": [ 00:32:17.168 { 00:32:17.168 "trtype": "TCP", 00:32:17.168 "adrfam": "IPv4", 00:32:17.168 "traddr": "10.0.0.2", 00:32:17.168 "trsvcid": "4420" 00:32:17.168 } 00:32:17.168 ], 00:32:17.168 "allow_any_host": true, 00:32:17.168 "hosts": [], 00:32:17.168 "serial_number": 
"SPDK00000000000001", 00:32:17.168 "model_number": "SPDK bdev Controller", 00:32:17.168 "max_namespaces": 1, 00:32:17.168 "min_cntlid": 1, 00:32:17.168 "max_cntlid": 65519, 00:32:17.168 "namespaces": [ 00:32:17.168 { 00:32:17.168 "nsid": 1, 00:32:17.168 "bdev_name": "Nvme0n1", 00:32:17.168 "name": "Nvme0n1", 00:32:17.168 "nguid": "883283279D834590AE9A56DD3B0BF56C", 00:32:17.168 "uuid": "88328327-9d83-4590-ae9a-56dd3b0bf56c" 00:32:17.168 } 00:32:17.168 ] 00:32:17.168 } 00:32:17.168 ] 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:17.168 11:00:44 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:17.168 11:00:44 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:17.168 11:00:44 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:17.168 11:00:44 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:17.168 11:00:44 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:17.168 11:00:44 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:17.168 11:00:44 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:17.168 rmmod nvme_tcp 00:32:17.168 rmmod nvme_fabrics 00:32:17.168 rmmod nvme_keyring 00:32:17.168 11:00:44 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:17.168 11:00:44 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:17.168 11:00:44 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:17.168 11:00:44 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 2929900 ']' 00:32:17.168 11:00:44 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2929900 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 2929900 ']' 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 2929900 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2929900 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2929900' 00:32:17.168 killing process with pid 2929900 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 2929900 00:32:17.168 11:00:44 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 2929900 00:32:18.539 11:00:46 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:18.539 11:00:46 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:18.539 11:00:46 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:18.539 11:00:46 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:18.539 11:00:46 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:18.539 11:00:46 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:18.539 11:00:46 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:18.539 11:00:46 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:18.539 11:00:46 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:18.539 11:00:46 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.539 11:00:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:18.539 11:00:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.072 11:00:48 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:21.072 00:32:21.072 real 0m21.114s 00:32:21.072 user 0m26.070s 00:32:21.072 sys 0m5.721s 00:32:21.072 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:21.072 11:00:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:21.072 ************************************ 00:32:21.072 END TEST nvmf_identify_passthru 00:32:21.072 ************************************ 00:32:21.072 11:00:48 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:21.072 11:00:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:21.072 11:00:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:21.072 11:00:48 -- common/autotest_common.sh@10 -- # set +x 00:32:21.072 ************************************ 00:32:21.072 START TEST nvmf_dif 00:32:21.072 ************************************ 00:32:21.072 11:00:48 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:21.072 * Looking for test 
storage... 00:32:21.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:21.072 11:00:48 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:21.072 11:00:48 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:32:21.072 11:00:48 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:21.072 11:00:48 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:32:21.072 11:00:48 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:21.072 11:00:48 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:21.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.072 --rc genhtml_branch_coverage=1 00:32:21.072 --rc genhtml_function_coverage=1 00:32:21.072 --rc genhtml_legend=1 00:32:21.072 --rc geninfo_all_blocks=1 00:32:21.072 --rc geninfo_unexecuted_blocks=1 00:32:21.072 00:32:21.072 ' 00:32:21.072 11:00:48 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:21.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.072 --rc genhtml_branch_coverage=1 00:32:21.072 --rc genhtml_function_coverage=1 00:32:21.072 --rc genhtml_legend=1 00:32:21.072 --rc geninfo_all_blocks=1 00:32:21.072 --rc geninfo_unexecuted_blocks=1 00:32:21.072 00:32:21.072 ' 00:32:21.072 11:00:48 nvmf_dif -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:21.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.072 --rc genhtml_branch_coverage=1 00:32:21.072 --rc genhtml_function_coverage=1 00:32:21.072 --rc genhtml_legend=1 00:32:21.072 --rc geninfo_all_blocks=1 00:32:21.072 --rc geninfo_unexecuted_blocks=1 00:32:21.072 00:32:21.072 ' 00:32:21.072 11:00:48 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:21.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:21.072 --rc genhtml_branch_coverage=1 00:32:21.072 --rc genhtml_function_coverage=1 00:32:21.072 --rc genhtml_legend=1 00:32:21.072 --rc geninfo_all_blocks=1 00:32:21.072 --rc geninfo_unexecuted_blocks=1 00:32:21.072 00:32:21.072 ' 00:32:21.072 11:00:48 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:21.072 11:00:48 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.072 11:00:48 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.073 11:00:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.073 11:00:48 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.073 11:00:48 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.073 11:00:48 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:32:21.073 11:00:48 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:21.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:21.073 11:00:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:32:21.073 11:00:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:21.073 11:00:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:21.073 11:00:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:32:21.073 11:00:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.073 11:00:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:21.073 11:00:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:21.073 11:00:48 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:32:21.073 11:00:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:26.337 11:00:53 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:26.338 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.338 
11:00:53 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:26.338 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:26.338 Found net devices under 0000:86:00.0: cvl_0_0 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:26.338 Found net devices under 0000:86:00.1: cvl_0_1 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:26.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.471 ms 00:32:26.338 00:32:26.338 --- 10.0.0.2 ping statistics --- 00:32:26.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.338 rtt min/avg/max/mdev = 0.471/0.471/0.471/0.000 ms 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:26.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:32:26.338 00:32:26.338 --- 10.0.0.1 ping statistics --- 00:32:26.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.338 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:32:26.338 11:00:53 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:28.865 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:28.865 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:32:28.865 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:32:28.865 11:00:56 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.865 11:00:56 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:28.865 11:00:56 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:28.865 11:00:56 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.865 11:00:56 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:28.865 11:00:56 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:28.865 11:00:56 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:28.865 11:00:56 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:32:28.865 11:00:56 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:28.865 11:00:56 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:28.865 11:00:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:28.865 11:00:56 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2935338 00:32:28.865 11:00:56 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2935338 00:32:28.865 11:00:56 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 2935338 ']' 00:32:28.865 11:00:56 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.865 11:00:56 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:28.865 11:00:56 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:28.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
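
The nvmftestinit trace above boils down to a small namespace-based loopback topology: the target-side port (cvl_0_0) is moved into its own network namespace and addressed as 10.0.0.2, while the initiator-side port (cvl_0_1) stays in the default namespace as 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420 before both directions are ping-checked. A minimal sketch of the same setup, with the commands collected from the trace (interface names and addresses are the ones from this run; run as root and adjust for other NICs):

# Target port lives in its own namespace; initiator port stays in the host namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic on port 4420 and verify reachability in both directions.
# (The harness additionally tags this rule with an SPDK_NVMF comment so it can be
# stripped again at teardown, as seen in the iptables-save | grep -v SPDK_NVMF step.)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
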
00:32:28.865 11:00:56 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:28.865 11:00:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:28.865 11:00:56 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:28.865 [2024-11-07 11:00:56.354705] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:32:28.865 [2024-11-07 11:00:56.354754] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.865 [2024-11-07 11:00:56.422190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.865 [2024-11-07 11:00:56.463706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:28.865 [2024-11-07 11:00:56.463744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:28.865 [2024-11-07 11:00:56.463752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:28.865 [2024-11-07 11:00:56.463757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:28.865 [2024-11-07 11:00:56.463762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:28.865 [2024-11-07 11:00:56.464313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.123 11:00:56 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:29.123 11:00:56 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:32:29.123 11:00:56 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:29.123 11:00:56 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:29.123 11:00:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:29.123 11:00:56 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:29.123 11:00:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:32:29.123 11:00:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:29.123 11:00:56 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.123 11:00:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:29.123 [2024-11-07 11:00:56.594622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.123 11:00:56 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.123 11:00:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:29.123 11:00:56 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:29.123 11:00:56 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:29.123 11:00:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:29.123 ************************************ 00:32:29.123 START TEST fio_dif_1_default 00:32:29.123 ************************************ 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:32:29.123 
11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:29.123 bdev_null0 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:29.123 [2024-11-07 11:00:56.662927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:29.123 
11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:29.123 { 00:32:29.123 "params": { 00:32:29.123 "name": "Nvme$subsystem", 00:32:29.123 "trtype": "$TEST_TRANSPORT", 00:32:29.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:29.123 "adrfam": "ipv4", 00:32:29.123 "trsvcid": "$NVMF_PORT", 00:32:29.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:29.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:29.123 "hdgst": ${hdgst:-false}, 00:32:29.123 "ddgst": ${ddgst:-false} 00:32:29.123 }, 00:32:29.123 "method": "bdev_nvme_attach_controller" 00:32:29.123 } 00:32:29.123 EOF 00:32:29.123 )") 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
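
The fio_dif_1_default setup traced above is a short sequence of RPCs against the nvmf_tgt started earlier: a TCP transport with DIF insert/strip enabled (created once per target process), then a null bdev with a 16-byte metadata area and DIF type 1, and a subsystem exposing that bdev on the namespace-side listener. A sketch of the same configuration done by hand with scripts/rpc.py, with arguments copied from the rpc_cmd calls above; the relative rpc.py path is an assumption, and the default /var/tmp/spdk.sock RPC socket remains reachable from the host because it is a path-based Unix socket even though the target runs inside cvl_0_0_ns_spdk:

# Once per target process: TCP transport with DIF insert/strip.
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
# Per test (fio_dif_1_default shown): 64 MiB null bdev, 512-byte blocks + 16-byte
# metadata, DIF type 1, exported through subsystem cnode0 on 10.0.0.2:4420.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
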
00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:29.123 "params": { 00:32:29.123 "name": "Nvme0", 00:32:29.123 "trtype": "tcp", 00:32:29.123 "traddr": "10.0.0.2", 00:32:29.123 "adrfam": "ipv4", 00:32:29.123 "trsvcid": "4420", 00:32:29.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:29.123 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:29.123 "hdgst": false, 00:32:29.123 "ddgst": false 00:32:29.123 }, 00:32:29.123 "method": "bdev_nvme_attach_controller" 00:32:29.123 }' 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:29.123 11:00:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:29.380 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:29.380 fio-3.35 00:32:29.380 Starting 1 thread 00:32:41.565 00:32:41.565 filename0: (groupid=0, jobs=1): err= 0: pid=2935708: Thu Nov 7 11:01:07 2024 00:32:41.565 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10015msec) 00:32:41.565 slat (nsec): min=5591, max=26695, avg=6155.46, stdev=987.93 00:32:41.565 clat (usec): min=40812, max=45894, avg=41027.63, stdev=351.91 00:32:41.565 lat (usec): min=40818, max=45921, avg=41033.79, stdev=352.34 00:32:41.565 clat percentiles (usec): 00:32:41.565 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:41.565 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:41.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:41.565 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:32:41.565 | 99.99th=[45876] 00:32:41.565 bw ( KiB/s): min= 384, max= 416, per=99.53%, avg=388.80, stdev=11.72, samples=20 00:32:41.565 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:32:41.565 lat (msec) : 50=100.00% 00:32:41.565 cpu : usr=91.96%, sys=7.79%, ctx=15, majf=0, minf=0 00:32:41.565 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:41.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.565 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.565 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:41.565 00:32:41.565 Run status group 0 (all jobs): 
00:32:41.565 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10015-10015msec 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.565 00:32:41.565 real 0m11.209s 00:32:41.565 user 0m15.934s 00:32:41.565 sys 0m1.064s 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:32:41.565 ************************************ 00:32:41.565 END TEST fio_dif_1_default 00:32:41.565 ************************************ 00:32:41.565 11:01:07 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:41.565 11:01:07 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:41.565 11:01:07 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:41.565 11:01:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:41.565 ************************************ 00:32:41.565 START TEST fio_dif_1_multi_subsystems 00:32:41.565 ************************************ 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.565 bdev_null0 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.565 [2024-11-07 11:01:07.933341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.565 bdev_null1 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:41.565 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:41.565 { 00:32:41.565 "params": { 00:32:41.565 "name": "Nvme$subsystem", 00:32:41.566 "trtype": "$TEST_TRANSPORT", 00:32:41.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:41.566 "adrfam": "ipv4", 00:32:41.566 "trsvcid": "$NVMF_PORT", 00:32:41.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:41.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:41.566 "hdgst": ${hdgst:-false}, 00:32:41.566 "ddgst": ${ddgst:-false} 00:32:41.566 }, 00:32:41.566 "method": "bdev_nvme_attach_controller" 00:32:41.566 } 00:32:41.566 EOF 00:32:41.566 )") 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1347 -- # grep libasan 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:41.566 { 00:32:41.566 "params": { 00:32:41.566 "name": "Nvme$subsystem", 00:32:41.566 "trtype": "$TEST_TRANSPORT", 00:32:41.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:41.566 "adrfam": "ipv4", 00:32:41.566 "trsvcid": "$NVMF_PORT", 00:32:41.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:41.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:41.566 "hdgst": ${hdgst:-false}, 00:32:41.566 "ddgst": ${ddgst:-false} 00:32:41.566 }, 00:32:41.566 "method": "bdev_nvme_attach_controller" 00:32:41.566 } 00:32:41.566 EOF 00:32:41.566 )") 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:32:41.566 11:01:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:41.566 "params": { 00:32:41.566 "name": "Nvme0", 00:32:41.566 "trtype": "tcp", 00:32:41.566 "traddr": "10.0.0.2", 00:32:41.566 "adrfam": "ipv4", 00:32:41.566 "trsvcid": "4420", 00:32:41.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:41.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:41.566 "hdgst": false, 00:32:41.566 "ddgst": false 00:32:41.566 }, 00:32:41.566 "method": "bdev_nvme_attach_controller" 00:32:41.566 },{ 00:32:41.566 "params": { 00:32:41.566 "name": "Nvme1", 00:32:41.566 "trtype": "tcp", 00:32:41.566 "traddr": "10.0.0.2", 00:32:41.566 "adrfam": "ipv4", 00:32:41.566 "trsvcid": "4420", 00:32:41.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.566 "hdgst": false, 00:32:41.566 "ddgst": false 00:32:41.566 }, 00:32:41.566 "method": "bdev_nvme_attach_controller" 00:32:41.566 }' 00:32:41.566 11:01:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:41.566 11:01:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:41.566 11:01:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:41.566 11:01:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:41.566 11:01:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:41.566 11:01:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:41.566 11:01:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:41.566 11:01:08 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:41.566 11:01:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:41.566 11:01:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:41.566 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:41.566 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:41.566 fio-3.35 00:32:41.566 Starting 2 threads 00:32:53.750 00:32:53.750 filename0: (groupid=0, jobs=1): err= 0: pid=2937676: Thu Nov 7 11:01:19 2024 00:32:53.750 read: IOPS=189, BW=757KiB/s (775kB/s)(7584KiB/10015msec) 00:32:53.750 slat (nsec): min=5931, max=53678, avg=7113.78, stdev=2253.08 00:32:53.750 clat (usec): min=412, max=42614, avg=21108.30, stdev=20552.82 00:32:53.750 lat (usec): min=418, max=42620, avg=21115.41, stdev=20552.22 00:32:53.750 clat percentiles (usec): 00:32:53.750 | 1.00th=[ 420], 5.00th=[ 424], 10.00th=[ 433], 20.00th=[ 441], 00:32:53.750 | 30.00th=[ 453], 40.00th=[ 537], 50.00th=[40633], 60.00th=[41157], 00:32:53.750 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:32:53.750 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:32:53.750 | 99.99th=[42730] 00:32:53.750 bw ( KiB/s): min= 672, max= 768, per=49.80%, avg=756.80, stdev=28.00, samples=20 00:32:53.750 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:32:53.750 lat (usec) : 500=39.50%, 750=10.28% 00:32:53.750 lat (msec) : 50=50.21% 00:32:53.750 cpu : usr=96.54%, sys=3.21%, ctx=8, majf=0, minf=9 00:32:53.750 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:53.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.750 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.750 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:53.750 filename1: (groupid=0, jobs=1): err= 0: pid=2937677: Thu Nov 7 11:01:19 2024 00:32:53.750 read: IOPS=190, BW=761KiB/s (780kB/s)(7632KiB/10024msec) 00:32:53.750 slat (nsec): min=5959, max=52436, avg=7144.56, stdev=2395.58 00:32:53.750 clat (usec): min=411, max=42626, avg=20994.05, stdev=20567.50 00:32:53.750 lat (usec): min=417, max=42632, avg=21001.19, stdev=20566.81 00:32:53.750 clat percentiles (usec): 00:32:53.750 | 1.00th=[ 420], 5.00th=[ 429], 10.00th=[ 433], 20.00th=[ 441], 00:32:53.750 | 30.00th=[ 449], 40.00th=[ 474], 50.00th=[ 1631], 60.00th=[41157], 00:32:53.750 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42730], 00:32:53.750 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:32:53.750 | 99.99th=[42730] 00:32:53.750 bw ( KiB/s): min= 704, max= 768, per=50.13%, avg=761.60, stdev=19.70, samples=20 00:32:53.750 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:32:53.750 lat (usec) : 500=41.09%, 750=8.75%, 1000=0.05% 00:32:53.750 lat (msec) : 2=0.21%, 50=49.90% 00:32:53.750 cpu : usr=96.41%, sys=3.34%, ctx=9, majf=0, minf=11 00:32:53.750 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:53.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:53.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.750 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.750 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:53.750 00:32:53.750 Run status group 0 (all jobs): 00:32:53.750 READ: bw=1518KiB/s (1554kB/s), 757KiB/s-761KiB/s (775kB/s-780kB/s), io=14.9MiB (15.6MB), run=10015-10024msec 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:53.750 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.751 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:53.751 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.751 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:53.751 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.751 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:53.751 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.751 00:32:53.751 real 0m11.679s 00:32:53.751 user 0m26.108s 00:32:53.751 sys 0m1.044s 00:32:53.751 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:53.751 11:01:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:53.751 ************************************ 00:32:53.751 END TEST fio_dif_1_multi_subsystems 00:32:53.751 ************************************ 00:32:53.751 11:01:19 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:32:53.751 11:01:19 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:53.751 11:01:19 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:53.751 11:01:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:53.751 ************************************ 00:32:53.751 START TEST fio_dif_rand_params 00:32:53.751 ************************************ 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:53.751 bdev_null0 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:53.751 [2024-11-07 11:01:19.688629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:53.751 
11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:53.751 { 00:32:53.751 "params": { 00:32:53.751 "name": "Nvme$subsystem", 00:32:53.751 "trtype": "$TEST_TRANSPORT", 00:32:53.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:53.751 "adrfam": "ipv4", 00:32:53.751 "trsvcid": "$NVMF_PORT", 00:32:53.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:53.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:53.751 "hdgst": ${hdgst:-false}, 00:32:53.751 "ddgst": ${ddgst:-false} 00:32:53.751 }, 00:32:53.751 "method": "bdev_nvme_attach_controller" 00:32:53.751 } 00:32:53.751 EOF 00:32:53.751 )") 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:53.751 "params": { 00:32:53.751 "name": "Nvme0", 00:32:53.751 "trtype": "tcp", 00:32:53.751 "traddr": "10.0.0.2", 00:32:53.751 "adrfam": "ipv4", 00:32:53.751 "trsvcid": "4420", 00:32:53.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:53.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:53.751 "hdgst": false, 00:32:53.751 "ddgst": false 00:32:53.751 }, 00:32:53.751 "method": "bdev_nvme_attach_controller" 00:32:53.751 }' 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:53.751 11:01:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:53.751 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:53.751 ... 
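(The trace above shows target/dif.sh assembling the bdev_nvme_attach_controller JSON on the fly and handing it to fio over /dev/fd before the run starts. For readability, the following is a minimal standalone sketch of the same setup sequence, under these assumptions: an SPDK nvmf_tgt is already running with a TCP transport created, the rpc_cmd wrapper used by the test maps to scripts/rpc.py in the job's checkout, and bdev.json / job.fio are hypothetical files holding the attach-controller JSON printed above and the randread 128k, iodepth=3, numjobs=3, runtime=5 job this test configures. It is illustrative only and not part of the recorded run.)

    #!/usr/bin/env bash
    set -euo pipefail

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path seen in the trace
    RPC="$SPDK_DIR/scripts/rpc.py"                               # assumed stand-in for the rpc_cmd wrapper

    # 64 MB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 3 (NULL_DIF=3 above)
    "$RPC" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

    # Export the bdev through an NVMe-oF subsystem listening on NVMe/TCP 10.0.0.2:4420
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Drive the exported namespace with fio through the SPDK bdev plugin.
    # bdev.json: the bdev_nvme_attach_controller config printed above (hypothetical file name).
    # job.fio:   the generated randread/128k/iodepth=3/numjobs=3/runtime=5 job (hypothetical file name).
    LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

(Using a null bdev keeps the data path self-contained, so the results that follow reflect the NVMe/TCP transport and DIF metadata handling rather than any physical device.)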
00:32:53.751 fio-3.35 00:32:53.751 Starting 3 threads 00:32:59.007 00:32:59.007 filename0: (groupid=0, jobs=1): err= 0: pid=2939580: Thu Nov 7 11:01:25 2024 00:32:59.007 read: IOPS=330, BW=41.3MiB/s (43.3MB/s)(209MiB/5048msec) 00:32:59.007 slat (nsec): min=2928, max=60438, avg=17577.17, stdev=5620.05 00:32:59.007 clat (usec): min=3341, max=55893, avg=9030.42, stdev=5560.36 00:32:59.007 lat (usec): min=3350, max=55903, avg=9047.99, stdev=5560.20 00:32:59.007 clat percentiles (usec): 00:32:59.007 | 1.00th=[ 4080], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 7373], 00:32:59.007 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8717], 00:32:59.007 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[ 9896], 95.00th=[10421], 00:32:59.007 | 99.00th=[49021], 99.50th=[49546], 99.90th=[55837], 99.95th=[55837], 00:32:59.007 | 99.99th=[55837] 00:32:59.007 bw ( KiB/s): min=39936, max=46080, per=35.57%, avg=42649.60, stdev=1961.55, samples=10 00:32:59.007 iops : min= 312, max= 360, avg=333.20, stdev=15.32, samples=10 00:32:59.007 lat (msec) : 4=0.96%, 10=89.87%, 20=7.43%, 50=1.38%, 100=0.36% 00:32:59.007 cpu : usr=96.22%, sys=3.43%, ctx=10, majf=0, minf=72 00:32:59.007 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.007 issued rwts: total=1668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.007 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:59.007 filename0: (groupid=0, jobs=1): err= 0: pid=2939581: Thu Nov 7 11:01:25 2024 00:32:59.007 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(182MiB/5005msec) 00:32:59.007 slat (nsec): min=4587, max=51932, avg=18206.42, stdev=8791.44 00:32:59.007 clat (usec): min=4004, max=89382, avg=10282.10, stdev=6684.31 00:32:59.007 lat (usec): min=4012, max=89394, avg=10300.31, stdev=6683.52 00:32:59.007 clat percentiles (usec): 00:32:59.007 | 1.00th=[ 5538], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 8160], 00:32:59.007 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:32:59.007 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11207], 95.00th=[11731], 00:32:59.007 | 99.00th=[49021], 99.50th=[49546], 99.90th=[51119], 99.95th=[89654], 00:32:59.007 | 99.99th=[89654] 00:32:59.007 bw ( KiB/s): min=20264, max=40704, per=31.07%, avg=37252.00, stdev=6361.62, samples=10 00:32:59.007 iops : min= 158, max= 318, avg=291.00, stdev=49.79, samples=10 00:32:59.007 lat (msec) : 10=65.27%, 20=32.12%, 50=2.20%, 100=0.41% 00:32:59.007 cpu : usr=95.94%, sys=3.74%, ctx=8, majf=0, minf=51 00:32:59.007 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.007 issued rwts: total=1457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.007 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:59.007 filename0: (groupid=0, jobs=1): err= 0: pid=2939582: Thu Nov 7 11:01:25 2024 00:32:59.007 read: IOPS=320, BW=40.1MiB/s (42.0MB/s)(200MiB/5002msec) 00:32:59.007 slat (nsec): min=6123, max=57107, avg=18584.16, stdev=9026.57 00:32:59.007 clat (usec): min=3189, max=51700, avg=9339.70, stdev=4311.68 00:32:59.007 lat (usec): min=3197, max=51708, avg=9358.29, stdev=4312.10 00:32:59.007 clat percentiles (usec): 00:32:59.007 | 1.00th=[ 4359], 5.00th=[ 6063], 10.00th=[ 6456], 
20.00th=[ 7373], 00:32:59.007 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:32:59.007 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11076], 95.00th=[11469], 00:32:59.007 | 99.00th=[12649], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:32:59.007 | 99.99th=[51643] 00:32:59.007 bw ( KiB/s): min=38144, max=44544, per=34.11%, avg=40893.33, stdev=2209.52, samples=9 00:32:59.007 iops : min= 298, max= 348, avg=319.44, stdev=17.21, samples=9 00:32:59.007 lat (msec) : 4=0.50%, 10=68.87%, 20=29.69%, 50=0.25%, 100=0.69% 00:32:59.007 cpu : usr=95.60%, sys=4.08%, ctx=8, majf=0, minf=22 00:32:59.007 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:59.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:59.007 issued rwts: total=1603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:59.007 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:59.007 00:32:59.007 Run status group 0 (all jobs): 00:32:59.007 READ: bw=117MiB/s (123MB/s), 36.4MiB/s-41.3MiB/s (38.2MB/s-43.3MB/s), io=591MiB (620MB), run=5002-5048msec 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 
-- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.007 bdev_null0 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.007 [2024-11-07 11:01:25.841210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.007 bdev_null1 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.007 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.008 bdev_null2 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:59.008 { 00:32:59.008 "params": { 00:32:59.008 "name": "Nvme$subsystem", 00:32:59.008 "trtype": "$TEST_TRANSPORT", 00:32:59.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:59.008 "adrfam": "ipv4", 00:32:59.008 "trsvcid": "$NVMF_PORT", 00:32:59.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:59.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:59.008 "hdgst": ${hdgst:-false}, 00:32:59.008 "ddgst": ${ddgst:-false} 00:32:59.008 }, 00:32:59.008 "method": "bdev_nvme_attach_controller" 00:32:59.008 } 00:32:59.008 EOF 00:32:59.008 )") 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:59.008 { 00:32:59.008 "params": { 00:32:59.008 "name": "Nvme$subsystem", 00:32:59.008 "trtype": "$TEST_TRANSPORT", 00:32:59.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:59.008 "adrfam": "ipv4", 00:32:59.008 "trsvcid": "$NVMF_PORT", 00:32:59.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:59.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:59.008 "hdgst": ${hdgst:-false}, 00:32:59.008 "ddgst": ${ddgst:-false} 00:32:59.008 }, 00:32:59.008 "method": "bdev_nvme_attach_controller" 00:32:59.008 } 00:32:59.008 EOF 00:32:59.008 )") 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:59.008 { 00:32:59.008 "params": { 00:32:59.008 "name": "Nvme$subsystem", 00:32:59.008 "trtype": "$TEST_TRANSPORT", 00:32:59.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:59.008 "adrfam": "ipv4", 00:32:59.008 "trsvcid": "$NVMF_PORT", 00:32:59.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:59.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:59.008 "hdgst": ${hdgst:-false}, 00:32:59.008 "ddgst": ${ddgst:-false} 00:32:59.008 }, 00:32:59.008 "method": "bdev_nvme_attach_controller" 00:32:59.008 } 00:32:59.008 EOF 00:32:59.008 )") 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:59.008 "params": { 00:32:59.008 "name": "Nvme0", 00:32:59.008 "trtype": "tcp", 00:32:59.008 "traddr": "10.0.0.2", 00:32:59.008 "adrfam": "ipv4", 00:32:59.008 "trsvcid": "4420", 00:32:59.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:59.008 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:59.008 "hdgst": false, 00:32:59.008 "ddgst": false 00:32:59.008 }, 00:32:59.008 "method": "bdev_nvme_attach_controller" 00:32:59.008 },{ 00:32:59.008 "params": { 00:32:59.008 "name": "Nvme1", 00:32:59.008 "trtype": "tcp", 00:32:59.008 "traddr": "10.0.0.2", 00:32:59.008 "adrfam": "ipv4", 00:32:59.008 "trsvcid": "4420", 00:32:59.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:59.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:59.008 "hdgst": false, 00:32:59.008 "ddgst": false 00:32:59.008 }, 00:32:59.008 "method": "bdev_nvme_attach_controller" 00:32:59.008 },{ 00:32:59.008 "params": { 00:32:59.008 "name": "Nvme2", 00:32:59.008 "trtype": "tcp", 00:32:59.008 "traddr": "10.0.0.2", 00:32:59.008 "adrfam": "ipv4", 00:32:59.008 "trsvcid": "4420", 00:32:59.008 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:59.008 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:59.008 "hdgst": false, 00:32:59.008 "ddgst": false 00:32:59.008 }, 00:32:59.008 "method": "bdev_nvme_attach_controller" 00:32:59.008 }' 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1347 -- # asan_lib= 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:59.008 11:01:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:59.009 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:59.009 ... 00:32:59.009 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:59.009 ... 00:32:59.009 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:59.009 ... 00:32:59.009 fio-3.35 00:32:59.009 Starting 24 threads 00:33:11.199 00:33:11.199 filename0: (groupid=0, jobs=1): err= 0: pid=2940696: Thu Nov 7 11:01:37 2024 00:33:11.199 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10002msec) 00:33:11.199 slat (nsec): min=8083, max=50835, avg=22835.35, stdev=6908.93 00:33:11.199 clat (usec): min=11394, max=29217, avg=27816.64, stdev=1135.92 00:33:11.199 lat (usec): min=11410, max=29232, avg=27839.48, stdev=1135.66 00:33:11.199 clat percentiles (usec): 00:33:11.199 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:11.199 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.199 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.199 | 99.00th=[28443], 99.50th=[28443], 99.90th=[29230], 99.95th=[29230], 00:33:11.199 | 99.99th=[29230] 00:33:11.199 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:33:11.199 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:33:11.199 lat (msec) : 20=0.56%, 50=99.44% 00:33:11.199 cpu : usr=98.54%, sys=1.06%, ctx=14, majf=0, minf=28 00:33:11.199 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:11.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.199 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.199 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.199 filename0: (groupid=0, jobs=1): err= 0: pid=2940697: Thu Nov 7 11:01:37 2024 00:33:11.199 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:33:11.199 slat (nsec): min=7672, max=67656, avg=20867.44, stdev=6559.67 00:33:11.199 clat (usec): min=10970, max=60133, avg=27913.17, stdev=2074.75 00:33:11.199 lat (usec): min=10979, max=60149, avg=27934.04, stdev=2075.22 00:33:11.199 clat percentiles (usec): 00:33:11.199 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:11.199 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.199 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.199 | 99.00th=[28443], 99.50th=[28967], 99.90th=[60031], 99.95th=[60031], 00:33:11.199 | 99.99th=[60031] 00:33:11.199 bw ( KiB/s): min= 2052, max= 2432, per=4.15%, avg=2272.20, stdev=81.18, samples=20 00:33:11.199 iops : min= 513, max= 608, avg=568.05, stdev=20.29, samples=20 00:33:11.199 lat (msec) : 20=0.84%, 50=98.88%, 100=0.28% 00:33:11.199 cpu : usr=98.70%, sys=0.92%, ctx=17, majf=0, minf=35 
00:33:11.199 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:11.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.199 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.199 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.199 filename0: (groupid=0, jobs=1): err= 0: pid=2940698: Thu Nov 7 11:01:37 2024 00:33:11.199 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:33:11.199 slat (nsec): min=5989, max=57654, avg=21486.46, stdev=7885.56 00:33:11.199 clat (usec): min=15203, max=43521, avg=27891.48, stdev=1185.95 00:33:11.199 lat (usec): min=15252, max=43538, avg=27912.96, stdev=1185.22 00:33:11.199 clat percentiles (usec): 00:33:11.199 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:11.199 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.199 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.199 | 99.00th=[28443], 99.50th=[28967], 99.90th=[43254], 99.95th=[43254], 00:33:11.199 | 99.99th=[43779] 00:33:11.199 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:33:11.199 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:33:11.199 lat (msec) : 20=0.56%, 50=99.44% 00:33:11.199 cpu : usr=98.53%, sys=1.09%, ctx=14, majf=0, minf=28 00:33:11.199 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:11.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.199 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.199 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.199 filename0: (groupid=0, jobs=1): err= 0: pid=2940699: Thu Nov 7 11:01:37 2024 00:33:11.199 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10002msec) 00:33:11.199 slat (nsec): min=8516, max=59099, avg=21600.15, stdev=6777.58 00:33:11.199 clat (usec): min=11355, max=29264, avg=27815.50, stdev=1133.18 00:33:11.199 lat (usec): min=11371, max=29285, avg=27837.10, stdev=1133.20 00:33:11.199 clat percentiles (usec): 00:33:11.199 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:11.199 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.199 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.199 | 99.00th=[28443], 99.50th=[28443], 99.90th=[29230], 99.95th=[29230], 00:33:11.199 | 99.99th=[29230] 00:33:11.199 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:33:11.199 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:33:11.199 lat (msec) : 20=0.56%, 50=99.44% 00:33:11.199 cpu : usr=98.75%, sys=0.87%, ctx=12, majf=0, minf=37 00:33:11.199 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:11.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.199 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.199 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.199 filename0: (groupid=0, jobs=1): err= 0: pid=2940700: Thu Nov 7 11:01:37 2024 00:33:11.199 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:33:11.199 slat (nsec): min=6769, 
max=86977, avg=26300.68, stdev=14426.28 00:33:11.199 clat (usec): min=11017, max=64744, avg=27860.54, stdev=1849.00 00:33:11.199 lat (usec): min=11023, max=64761, avg=27886.84, stdev=1848.43 00:33:11.199 clat percentiles (usec): 00:33:11.199 | 1.00th=[26608], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:33:11.199 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.199 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.199 | 99.00th=[28705], 99.50th=[38536], 99.90th=[49021], 99.95th=[64750], 00:33:11.199 | 99.99th=[64750] 00:33:11.199 bw ( KiB/s): min= 2052, max= 2432, per=4.15%, avg=2272.20, stdev=81.18, samples=20 00:33:11.199 iops : min= 513, max= 608, avg=568.05, stdev=20.29, samples=20 00:33:11.199 lat (msec) : 20=0.84%, 50=99.10%, 100=0.05% 00:33:11.199 cpu : usr=98.56%, sys=1.04%, ctx=11, majf=0, minf=32 00:33:11.199 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:11.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.199 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.199 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.199 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.199 filename0: (groupid=0, jobs=1): err= 0: pid=2940701: Thu Nov 7 11:01:37 2024 00:33:11.199 read: IOPS=570, BW=2281KiB/s (2336kB/s)(22.3MiB/10017msec) 00:33:11.199 slat (nsec): min=6428, max=45353, avg=18896.75, stdev=5416.29 00:33:11.199 clat (usec): min=16835, max=34346, avg=27899.14, stdev=931.84 00:33:11.199 lat (usec): min=16852, max=34365, avg=27918.03, stdev=931.58 00:33:11.199 clat percentiles (usec): 00:33:11.199 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:11.199 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.199 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:11.199 | 99.00th=[28443], 99.50th=[28967], 99.90th=[34341], 99.95th=[34341], 00:33:11.199 | 99.99th=[34341] 00:33:11.199 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 00:33:11.199 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:11.199 lat (msec) : 20=0.72%, 50=99.28% 00:33:11.199 cpu : usr=98.44%, sys=1.19%, ctx=16, majf=0, minf=52 00:33:11.200 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:11.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.200 filename0: (groupid=0, jobs=1): err= 0: pid=2940702: Thu Nov 7 11:01:37 2024 00:33:11.200 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10002msec) 00:33:11.200 slat (nsec): min=7179, max=66035, avg=19491.48, stdev=7388.22 00:33:11.200 clat (usec): min=11325, max=29272, avg=27860.16, stdev=1140.97 00:33:11.200 lat (usec): min=11345, max=29285, avg=27879.66, stdev=1140.04 00:33:11.200 clat percentiles (usec): 00:33:11.200 | 1.00th=[26346], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:11.200 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.200 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:11.200 | 99.00th=[28443], 99.50th=[28705], 99.90th=[29230], 99.95th=[29230], 00:33:11.200 | 99.99th=[29230] 
00:33:11.200 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:33:11.200 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:33:11.200 lat (msec) : 20=0.56%, 50=99.44% 00:33:11.200 cpu : usr=98.49%, sys=1.14%, ctx=12, majf=0, minf=28 00:33:11.200 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:11.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.200 filename0: (groupid=0, jobs=1): err= 0: pid=2940703: Thu Nov 7 11:01:37 2024 00:33:11.200 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10007msec) 00:33:11.200 slat (nsec): min=7034, max=75647, avg=24191.62, stdev=11040.05 00:33:11.200 clat (usec): min=11571, max=51222, avg=27881.83, stdev=1705.58 00:33:11.200 lat (usec): min=11601, max=51245, avg=27906.03, stdev=1705.50 00:33:11.200 clat percentiles (usec): 00:33:11.200 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:11.200 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.200 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.200 | 99.00th=[28705], 99.50th=[28967], 99.90th=[51119], 99.95th=[51119], 00:33:11.200 | 99.99th=[51119] 00:33:11.200 bw ( KiB/s): min= 2052, max= 2304, per=4.15%, avg=2270.53, stdev=71.25, samples=19 00:33:11.200 iops : min= 513, max= 576, avg=567.63, stdev=17.81, samples=19 00:33:11.200 lat (msec) : 20=0.75%, 50=98.96%, 100=0.28% 00:33:11.200 cpu : usr=98.69%, sys=0.93%, ctx=12, majf=0, minf=31 00:33:11.200 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:11.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.200 filename1: (groupid=0, jobs=1): err= 0: pid=2940704: Thu Nov 7 11:01:37 2024 00:33:11.200 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10006msec) 00:33:11.200 slat (nsec): min=5644, max=78652, avg=22804.52, stdev=10326.05 00:33:11.200 clat (usec): min=11373, max=49242, avg=27902.35, stdev=2540.94 00:33:11.200 lat (usec): min=11382, max=49258, avg=27925.16, stdev=2540.69 00:33:11.200 clat percentiles (usec): 00:33:11.200 | 1.00th=[18744], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:11.200 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.200 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:33:11.200 | 99.00th=[36963], 99.50th=[38011], 99.90th=[49021], 99.95th=[49021], 00:33:11.200 | 99.99th=[49021] 00:33:11.200 bw ( KiB/s): min= 2048, max= 2360, per=4.16%, avg=2274.80, stdev=69.85, samples=20 00:33:11.200 iops : min= 512, max= 590, avg=568.70, stdev=17.46, samples=20 00:33:11.200 lat (msec) : 20=2.98%, 50=97.02% 00:33:11.200 cpu : usr=98.63%, sys=0.99%, ctx=10, majf=0, minf=39 00:33:11.200 IO depths : 1=4.8%, 2=10.7%, 4=23.9%, 8=52.9%, 16=7.7%, 32=0.0%, >=64=0.0% 00:33:11.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 issued rwts: total=5696,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:33:11.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.200 filename1: (groupid=0, jobs=1): err= 0: pid=2940705: Thu Nov 7 11:01:37 2024 00:33:11.200 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10002msec) 00:33:11.200 slat (nsec): min=7615, max=62029, avg=21071.02, stdev=6509.15 00:33:11.200 clat (usec): min=11344, max=29185, avg=27839.13, stdev=1139.18 00:33:11.200 lat (usec): min=11367, max=29202, avg=27860.20, stdev=1138.56 00:33:11.200 clat percentiles (usec): 00:33:11.200 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:11.200 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.200 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:11.200 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28967], 99.95th=[29230], 00:33:11.200 | 99.99th=[29230] 00:33:11.200 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:33:11.200 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:33:11.200 lat (msec) : 20=0.56%, 50=99.44% 00:33:11.200 cpu : usr=98.46%, sys=1.17%, ctx=8, majf=0, minf=32 00:33:11.200 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:11.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.200 filename1: (groupid=0, jobs=1): err= 0: pid=2940706: Thu Nov 7 11:01:37 2024 00:33:11.200 read: IOPS=569, BW=2279KiB/s (2334kB/s)(22.3MiB/10008msec) 00:33:11.200 slat (nsec): min=6965, max=75840, avg=24599.08, stdev=11248.28 00:33:11.200 clat (usec): min=11766, max=43463, avg=27857.40, stdev=1936.65 00:33:11.200 lat (usec): min=11792, max=43484, avg=27882.00, stdev=1937.14 00:33:11.200 clat percentiles (usec): 00:33:11.200 | 1.00th=[18744], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:11.200 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.200 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.200 | 99.00th=[36963], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254], 00:33:11.200 | 99.99th=[43254] 00:33:11.200 bw ( KiB/s): min= 2160, max= 2304, per=4.15%, avg=2272.84, stdev=55.55, samples=19 00:33:11.200 iops : min= 540, max= 576, avg=568.21, stdev=13.89, samples=19 00:33:11.200 lat (msec) : 20=1.60%, 50=98.40% 00:33:11.200 cpu : usr=98.44%, sys=1.18%, ctx=15, majf=0, minf=53 00:33:11.200 IO depths : 1=5.4%, 2=11.5%, 4=24.6%, 8=51.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:33:11.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 issued rwts: total=5702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.200 filename1: (groupid=0, jobs=1): err= 0: pid=2940707: Thu Nov 7 11:01:37 2024 00:33:11.200 read: IOPS=570, BW=2282KiB/s (2337kB/s)(22.3MiB/10012msec) 00:33:11.200 slat (nsec): min=6919, max=39033, avg=20106.66, stdev=5228.86 00:33:11.200 clat (usec): min=16725, max=29398, avg=27862.93, stdev=873.26 00:33:11.200 lat (usec): min=16746, max=29415, avg=27883.03, stdev=873.53 00:33:11.200 clat percentiles (usec): 00:33:11.200 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 
20.00th=[27919], 00:33:11.200 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.200 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.200 | 99.00th=[28443], 99.50th=[28967], 99.90th=[29230], 99.95th=[29492], 00:33:11.200 | 99.99th=[29492] 00:33:11.200 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 00:33:11.200 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:11.200 lat (msec) : 20=0.72%, 50=99.28% 00:33:11.200 cpu : usr=98.44%, sys=1.20%, ctx=7, majf=0, minf=40 00:33:11.200 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:11.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.200 filename1: (groupid=0, jobs=1): err= 0: pid=2940708: Thu Nov 7 11:01:37 2024 00:33:11.200 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10002msec) 00:33:11.200 slat (nsec): min=7125, max=53922, avg=12716.52, stdev=6353.14 00:33:11.200 clat (usec): min=11463, max=38349, avg=27909.96, stdev=1305.94 00:33:11.200 lat (usec): min=11517, max=38357, avg=27922.68, stdev=1304.87 00:33:11.200 clat percentiles (usec): 00:33:11.200 | 1.00th=[21365], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:11.200 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:33:11.200 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:11.200 | 99.00th=[28443], 99.50th=[28705], 99.90th=[38011], 99.95th=[38536], 00:33:11.200 | 99.99th=[38536] 00:33:11.200 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:33:11.200 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:33:11.200 lat (msec) : 20=0.77%, 50=99.23% 00:33:11.200 cpu : usr=98.49%, sys=1.13%, ctx=11, majf=0, minf=57 00:33:11.200 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:11.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.200 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.200 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.200 filename1: (groupid=0, jobs=1): err= 0: pid=2940709: Thu Nov 7 11:01:37 2024 00:33:11.200 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10002msec) 00:33:11.200 slat (nsec): min=7538, max=57993, avg=21886.71, stdev=6434.40 00:33:11.200 clat (usec): min=11337, max=36491, avg=27823.69, stdev=1161.26 00:33:11.200 lat (usec): min=11353, max=36520, avg=27845.58, stdev=1161.12 00:33:11.200 clat percentiles (usec): 00:33:11.200 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:11.200 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.201 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.201 | 99.00th=[28443], 99.50th=[28443], 99.90th=[29230], 99.95th=[29230], 00:33:11.201 | 99.99th=[36439] 00:33:11.201 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:33:11.201 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:33:11.201 lat (msec) : 20=0.60%, 50=99.40% 00:33:11.201 cpu : usr=98.50%, sys=1.13%, ctx=13, majf=0, minf=31 00:33:11.201 IO depths : 
1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:11.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.201 filename1: (groupid=0, jobs=1): err= 0: pid=2940710: Thu Nov 7 11:01:37 2024 00:33:11.201 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:33:11.201 slat (nsec): min=6997, max=88182, avg=14849.31, stdev=7820.25 00:33:11.201 clat (usec): min=17303, max=41889, avg=27979.41, stdev=1412.92 00:33:11.201 lat (usec): min=17316, max=41913, avg=27994.26, stdev=1412.50 00:33:11.201 clat percentiles (usec): 00:33:11.201 | 1.00th=[18744], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:11.201 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.201 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:11.201 | 99.00th=[33162], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 00:33:11.201 | 99.99th=[41681] 00:33:11.201 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 00:33:11.201 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:11.201 lat (msec) : 20=1.14%, 50=98.86% 00:33:11.201 cpu : usr=98.61%, sys=0.98%, ctx=27, majf=0, minf=54 00:33:11.201 IO depths : 1=5.9%, 2=12.0%, 4=24.7%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:11.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.201 filename1: (groupid=0, jobs=1): err= 0: pid=2940711: Thu Nov 7 11:01:37 2024 00:33:11.201 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:33:11.201 slat (nsec): min=5220, max=80574, avg=24026.36, stdev=10707.10 00:33:11.201 clat (usec): min=11524, max=59766, avg=27874.32, stdev=1598.38 00:33:11.201 lat (usec): min=11538, max=59782, avg=27898.35, stdev=1598.08 00:33:11.201 clat percentiles (usec): 00:33:11.201 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:11.201 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.201 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.201 | 99.00th=[28443], 99.50th=[28967], 99.90th=[49021], 99.95th=[49021], 00:33:11.201 | 99.99th=[59507] 00:33:11.201 bw ( KiB/s): min= 2048, max= 2416, per=4.16%, avg=2277.60, stdev=77.22, samples=20 00:33:11.201 iops : min= 512, max= 604, avg=569.40, stdev=19.30, samples=20 00:33:11.201 lat (msec) : 20=0.60%, 50=99.37%, 100=0.04% 00:33:11.201 cpu : usr=98.43%, sys=1.20%, ctx=19, majf=0, minf=44 00:33:11.201 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:11.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.201 filename2: (groupid=0, jobs=1): err= 0: pid=2940712: Thu Nov 7 11:01:37 2024 00:33:11.201 read: IOPS=570, BW=2282KiB/s (2337kB/s)(22.3MiB/10012msec) 00:33:11.201 slat (nsec): min=6536, max=40596, 
avg=18889.74, stdev=5735.00 00:33:11.201 clat (usec): min=17052, max=35821, avg=27886.30, stdev=974.42 00:33:11.201 lat (usec): min=17060, max=35844, avg=27905.19, stdev=974.78 00:33:11.201 clat percentiles (usec): 00:33:11.201 | 1.00th=[22938], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:11.201 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.201 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:11.201 | 99.00th=[28443], 99.50th=[29230], 99.90th=[35914], 99.95th=[35914], 00:33:11.201 | 99.99th=[35914] 00:33:11.201 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 00:33:11.201 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:11.201 lat (msec) : 20=0.72%, 50=99.28% 00:33:11.201 cpu : usr=98.60%, sys=1.03%, ctx=13, majf=0, minf=33 00:33:11.201 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:11.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.201 filename2: (groupid=0, jobs=1): err= 0: pid=2940713: Thu Nov 7 11:01:37 2024 00:33:11.201 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10007msec) 00:33:11.201 slat (nsec): min=5037, max=43766, avg=21055.35, stdev=6164.69 00:33:11.201 clat (usec): min=16877, max=44322, avg=27920.25, stdev=1189.06 00:33:11.201 lat (usec): min=16902, max=44337, avg=27941.30, stdev=1188.69 00:33:11.201 clat percentiles (usec): 00:33:11.201 | 1.00th=[27657], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:11.201 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.201 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.201 | 99.00th=[28443], 99.50th=[29230], 99.90th=[44303], 99.95th=[44303], 00:33:11.201 | 99.99th=[44303] 00:33:11.201 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:33:11.201 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:33:11.201 lat (msec) : 20=0.56%, 50=99.44% 00:33:11.201 cpu : usr=98.42%, sys=1.20%, ctx=9, majf=0, minf=36 00:33:11.201 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:11.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.201 filename2: (groupid=0, jobs=1): err= 0: pid=2940714: Thu Nov 7 11:01:37 2024 00:33:11.201 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10002msec) 00:33:11.201 slat (nsec): min=11543, max=58708, avg=21552.06, stdev=6195.51 00:33:11.201 clat (usec): min=11326, max=29257, avg=27823.90, stdev=1137.13 00:33:11.201 lat (usec): min=11340, max=29271, avg=27845.46, stdev=1136.96 00:33:11.201 clat percentiles (usec): 00:33:11.201 | 1.00th=[26084], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:11.201 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.201 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.201 | 99.00th=[28443], 99.50th=[28443], 99.90th=[29230], 99.95th=[29230], 00:33:11.201 | 99.99th=[29230] 00:33:11.201 bw ( KiB/s): 
min= 2176, max= 2432, per=4.17%, avg=2283.79, stdev=64.19, samples=19 00:33:11.201 iops : min= 544, max= 608, avg=570.95, stdev=16.05, samples=19 00:33:11.201 lat (msec) : 20=0.56%, 50=99.44% 00:33:11.201 cpu : usr=98.50%, sys=1.13%, ctx=8, majf=0, minf=26 00:33:11.201 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:11.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.201 filename2: (groupid=0, jobs=1): err= 0: pid=2940715: Thu Nov 7 11:01:37 2024 00:33:11.201 read: IOPS=570, BW=2282KiB/s (2337kB/s)(22.3MiB/10011msec) 00:33:11.201 slat (nsec): min=6140, max=42470, avg=20492.56, stdev=6119.40 00:33:11.201 clat (usec): min=11053, max=49395, avg=27851.10, stdev=1420.29 00:33:11.201 lat (usec): min=11064, max=49412, avg=27871.59, stdev=1420.48 00:33:11.201 clat percentiles (usec): 00:33:11.201 | 1.00th=[20841], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:11.201 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.201 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.201 | 99.00th=[28443], 99.50th=[29230], 99.90th=[36963], 99.95th=[49546], 00:33:11.201 | 99.99th=[49546] 00:33:11.201 bw ( KiB/s): min= 2176, max= 2407, per=4.16%, avg=2277.15, stdev=64.12, samples=20 00:33:11.201 iops : min= 544, max= 601, avg=569.25, stdev=15.95, samples=20 00:33:11.201 lat (msec) : 20=0.84%, 50=99.16% 00:33:11.201 cpu : usr=98.58%, sys=1.04%, ctx=12, majf=0, minf=30 00:33:11.201 IO depths : 1=6.1%, 2=12.2%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:11.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.201 filename2: (groupid=0, jobs=1): err= 0: pid=2940716: Thu Nov 7 11:01:37 2024 00:33:11.201 read: IOPS=579, BW=2318KiB/s (2374kB/s)(22.7MiB/10021msec) 00:33:11.201 slat (nsec): min=3143, max=52134, avg=14213.09, stdev=4730.24 00:33:11.201 clat (usec): min=1373, max=29380, avg=27487.25, stdev=3338.80 00:33:11.201 lat (usec): min=1380, max=29392, avg=27501.47, stdev=3339.55 00:33:11.201 clat percentiles (usec): 00:33:11.201 | 1.00th=[ 1450], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:11.201 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.201 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:11.201 | 99.00th=[28443], 99.50th=[28705], 99.90th=[29230], 99.95th=[29230], 00:33:11.201 | 99.99th=[29492] 00:33:11.201 bw ( KiB/s): min= 2176, max= 3072, per=4.23%, avg=2316.80, stdev=185.26, samples=20 00:33:11.201 iops : min= 544, max= 768, avg=579.20, stdev=46.31, samples=20 00:33:11.201 lat (msec) : 2=1.10%, 4=0.28%, 20=1.02%, 50=97.61% 00:33:11.201 cpu : usr=98.39%, sys=1.14%, ctx=15, majf=0, minf=53 00:33:11.201 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:11.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.201 issued rwts: total=5808,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:33:11.201 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.201 filename2: (groupid=0, jobs=1): err= 0: pid=2940717: Thu Nov 7 11:01:37 2024 00:33:11.201 read: IOPS=570, BW=2280KiB/s (2335kB/s)(22.3MiB/10020msec) 00:33:11.202 slat (nsec): min=4169, max=52048, avg=15030.94, stdev=4840.22 00:33:11.202 clat (usec): min=11588, max=46322, avg=27935.50, stdev=1517.48 00:33:11.202 lat (usec): min=11595, max=46335, avg=27950.53, stdev=1517.15 00:33:11.202 clat percentiles (usec): 00:33:11.202 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:11.202 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.202 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:11.202 | 99.00th=[28443], 99.50th=[29230], 99.90th=[46400], 99.95th=[46400], 00:33:11.202 | 99.99th=[46400] 00:33:11.202 bw ( KiB/s): min= 2176, max= 2304, per=4.17%, avg=2278.40, stdev=52.53, samples=20 00:33:11.202 iops : min= 544, max= 576, avg=569.60, stdev=13.13, samples=20 00:33:11.202 lat (msec) : 20=0.84%, 50=99.16% 00:33:11.202 cpu : usr=98.47%, sys=1.13%, ctx=30, majf=0, minf=36 00:33:11.202 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:11.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.202 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.202 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.202 filename2: (groupid=0, jobs=1): err= 0: pid=2940718: Thu Nov 7 11:01:37 2024 00:33:11.202 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:33:11.202 slat (nsec): min=6467, max=75808, avg=23382.40, stdev=10839.27 00:33:11.202 clat (usec): min=7771, max=49418, avg=27881.72, stdev=1616.22 00:33:11.202 lat (usec): min=7778, max=49436, avg=27905.10, stdev=1616.30 00:33:11.202 clat percentiles (usec): 00:33:11.202 | 1.00th=[26870], 5.00th=[27657], 10.00th=[27657], 20.00th=[27657], 00:33:11.202 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.202 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.202 | 99.00th=[28705], 99.50th=[28967], 99.90th=[49546], 99.95th=[49546], 00:33:11.202 | 99.99th=[49546] 00:33:11.202 bw ( KiB/s): min= 2052, max= 2416, per=4.16%, avg=2277.80, stdev=76.60, samples=20 00:33:11.202 iops : min= 513, max= 604, avg=569.45, stdev=19.15, samples=20 00:33:11.202 lat (msec) : 10=0.04%, 20=0.63%, 50=99.33% 00:33:11.202 cpu : usr=98.71%, sys=0.91%, ctx=15, majf=0, minf=46 00:33:11.202 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:33:11.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.202 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.202 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.202 filename2: (groupid=0, jobs=1): err= 0: pid=2940719: Thu Nov 7 11:01:37 2024 00:33:11.202 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:33:11.202 slat (nsec): min=7200, max=90607, avg=28203.36, stdev=15364.50 00:33:11.202 clat (usec): min=11639, max=48994, avg=27837.73, stdev=1522.73 00:33:11.202 lat (usec): min=11652, max=49011, avg=27865.94, stdev=1522.17 00:33:11.202 clat percentiles (usec): 00:33:11.202 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 
00:33:11.202 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:11.202 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:11.202 | 99.00th=[28443], 99.50th=[28967], 99.90th=[49021], 99.95th=[49021], 00:33:11.202 | 99.99th=[49021] 00:33:11.202 bw ( KiB/s): min= 2052, max= 2304, per=4.15%, avg=2270.53, stdev=71.25, samples=19 00:33:11.202 iops : min= 513, max= 576, avg=567.63, stdev=17.81, samples=19 00:33:11.202 lat (msec) : 20=0.56%, 50=99.44% 00:33:11.202 cpu : usr=98.62%, sys=1.00%, ctx=5, majf=0, minf=28 00:33:11.202 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:11.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.202 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:11.202 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:11.202 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:11.202 00:33:11.202 Run status group 0 (all jobs): 00:33:11.202 READ: bw=53.4MiB/s (56.0MB/s), 2277KiB/s-2318KiB/s (2331kB/s-2374kB/s), io=535MiB (561MB), run=10002-10021msec 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.202 bdev_null0 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.202 [2024-11-07 11:01:37.778218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.202 bdev_null1 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:11.202 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:11.203 { 00:33:11.203 "params": { 00:33:11.203 "name": "Nvme$subsystem", 00:33:11.203 "trtype": "$TEST_TRANSPORT", 00:33:11.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:11.203 "adrfam": "ipv4", 00:33:11.203 "trsvcid": "$NVMF_PORT", 00:33:11.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:11.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:11.203 "hdgst": ${hdgst:-false}, 00:33:11.203 "ddgst": ${ddgst:-false} 00:33:11.203 }, 00:33:11.203 "method": "bdev_nvme_attach_controller" 00:33:11.203 } 00:33:11.203 EOF 00:33:11.203 )") 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:11.203 { 00:33:11.203 "params": { 00:33:11.203 "name": "Nvme$subsystem", 00:33:11.203 "trtype": "$TEST_TRANSPORT", 00:33:11.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:11.203 "adrfam": "ipv4", 00:33:11.203 "trsvcid": "$NVMF_PORT", 00:33:11.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:11.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:11.203 "hdgst": ${hdgst:-false}, 00:33:11.203 "ddgst": ${ddgst:-false} 00:33:11.203 }, 00:33:11.203 "method": "bdev_nvme_attach_controller" 00:33:11.203 } 00:33:11.203 EOF 00:33:11.203 )") 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:11.203 11:01:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:11.203 "params": { 00:33:11.203 "name": "Nvme0", 00:33:11.203 "trtype": "tcp", 00:33:11.203 "traddr": "10.0.0.2", 00:33:11.203 "adrfam": "ipv4", 00:33:11.203 "trsvcid": "4420", 00:33:11.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:11.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:11.203 "hdgst": false, 00:33:11.203 "ddgst": false 00:33:11.203 }, 00:33:11.203 "method": "bdev_nvme_attach_controller" 00:33:11.203 },{ 00:33:11.203 "params": { 00:33:11.203 "name": "Nvme1", 00:33:11.203 "trtype": "tcp", 00:33:11.203 "traddr": "10.0.0.2", 00:33:11.203 "adrfam": "ipv4", 00:33:11.203 "trsvcid": "4420", 00:33:11.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:11.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:11.203 "hdgst": false, 00:33:11.203 "ddgst": false 00:33:11.203 }, 00:33:11.203 "method": "bdev_nvme_attach_controller" 00:33:11.203 }' 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:11.203 11:01:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.203 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:11.203 ... 00:33:11.203 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:11.203 ... 
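At this point the trace has assembled the bdev JSON for fio: two bdev_nvme_attach_controller entries (Nvme0 and Nvme1) pointing at cnode0 and cnode1 on 10.0.0.2:4420, and the 4-thread randread job is about to start. As a hedged sketch only, the target-side state for subsystem 0 could be reproduced by hand with the same RPCs that appear in the trace; the ./scripts/rpc.py invocation style and the bdev.json/job.fio file names are illustrative assumptions, while the commands and arguments are copied from the trace above (subsystem 1 is identical with the suffix 1, and an NVMe/TCP transport is assumed to already exist on the target):

# Sketch, assuming a running nvmf_tgt with a TCP transport already created.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# fio then attaches over NVMe/TCP through a JSON config like the one printed above.
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio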
00:33:11.203 fio-3.35 00:33:11.203 Starting 4 threads 00:33:16.546 00:33:16.546 filename0: (groupid=0, jobs=1): err= 0: pid=2942663: Thu Nov 7 11:01:43 2024 00:33:16.546 read: IOPS=2827, BW=22.1MiB/s (23.2MB/s)(111MiB/5004msec) 00:33:16.546 slat (nsec): min=4112, max=30042, avg=9038.32, stdev=3128.79 00:33:16.546 clat (usec): min=771, max=5452, avg=2800.72, stdev=510.45 00:33:16.546 lat (usec): min=780, max=5458, avg=2809.76, stdev=510.34 00:33:16.546 clat percentiles (usec): 00:33:16.546 | 1.00th=[ 1565], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2442], 00:33:16.546 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2900], 00:33:16.546 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3294], 95.00th=[ 3752], 00:33:16.546 | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 4948], 99.95th=[ 5080], 00:33:16.546 | 99.99th=[ 5342] 00:33:16.546 bw ( KiB/s): min=20672, max=24768, per=27.07%, avg=22622.80, stdev=1491.32, samples=10 00:33:16.546 iops : min= 2584, max= 3096, avg=2827.80, stdev=186.45, samples=10 00:33:16.546 lat (usec) : 1000=0.62% 00:33:16.546 lat (msec) : 2=1.86%, 4=93.90%, 10=3.62% 00:33:16.546 cpu : usr=95.22%, sys=4.42%, ctx=13, majf=0, minf=9 00:33:16.546 IO depths : 1=0.4%, 2=8.2%, 4=63.9%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:16.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:16.546 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:16.546 issued rwts: total=14148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:16.546 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:16.546 filename0: (groupid=0, jobs=1): err= 0: pid=2942664: Thu Nov 7 11:01:43 2024 00:33:16.546 read: IOPS=2514, BW=19.6MiB/s (20.6MB/s)(98.2MiB/5001msec) 00:33:16.546 slat (nsec): min=6118, max=36429, avg=9105.67, stdev=3258.62 00:33:16.546 clat (usec): min=924, max=5580, avg=3156.21, stdev=486.79 00:33:16.546 lat (usec): min=931, max=5591, avg=3165.32, stdev=486.47 00:33:16.546 clat percentiles (usec): 00:33:16.546 | 1.00th=[ 2212], 5.00th=[ 2573], 10.00th=[ 2737], 20.00th=[ 2835], 00:33:16.546 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3097], 00:33:16.546 | 70.00th=[ 3228], 80.00th=[ 3425], 90.00th=[ 3752], 95.00th=[ 4178], 00:33:16.546 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 5342], 99.95th=[ 5342], 00:33:16.546 | 99.99th=[ 5604] 00:33:16.546 bw ( KiB/s): min=18896, max=21024, per=23.96%, avg=20023.11, stdev=747.48, samples=9 00:33:16.546 iops : min= 2362, max= 2628, avg=2502.89, stdev=93.44, samples=9 00:33:16.546 lat (usec) : 1000=0.02% 00:33:16.546 lat (msec) : 2=0.41%, 4=93.34%, 10=6.23% 00:33:16.546 cpu : usr=96.30%, sys=3.36%, ctx=8, majf=0, minf=9 00:33:16.546 IO depths : 1=0.1%, 2=2.6%, 4=67.0%, 8=30.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:16.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:16.546 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:16.546 issued rwts: total=12575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:16.546 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:16.546 filename1: (groupid=0, jobs=1): err= 0: pid=2942665: Thu Nov 7 11:01:43 2024 00:33:16.546 read: IOPS=2650, BW=20.7MiB/s (21.7MB/s)(104MiB/5002msec) 00:33:16.546 slat (nsec): min=6116, max=49154, avg=9120.70, stdev=3281.09 00:33:16.546 clat (usec): min=1049, max=5376, avg=2989.95, stdev=507.92 00:33:16.546 lat (usec): min=1057, max=5383, avg=2999.07, stdev=507.67 00:33:16.546 clat percentiles (usec): 00:33:16.546 | 1.00th=[ 1975], 5.00th=[ 2311], 
10.00th=[ 2442], 20.00th=[ 2606], 00:33:16.546 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 3032], 00:33:16.546 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3654], 95.00th=[ 4080], 00:33:16.546 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5014], 99.95th=[ 5080], 00:33:16.546 | 99.99th=[ 5342] 00:33:16.546 bw ( KiB/s): min=20144, max=22880, per=25.38%, avg=21206.40, stdev=940.31, samples=10 00:33:16.546 iops : min= 2518, max= 2860, avg=2650.80, stdev=117.54, samples=10 00:33:16.546 lat (msec) : 2=1.13%, 4=93.54%, 10=5.32% 00:33:16.547 cpu : usr=95.26%, sys=4.40%, ctx=13, majf=0, minf=9 00:33:16.547 IO depths : 1=0.2%, 2=5.9%, 4=66.0%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:16.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:16.547 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:16.547 issued rwts: total=13259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:16.547 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:16.547 filename1: (groupid=0, jobs=1): err= 0: pid=2942666: Thu Nov 7 11:01:43 2024 00:33:16.547 read: IOPS=2456, BW=19.2MiB/s (20.1MB/s)(96.0MiB/5001msec) 00:33:16.547 slat (nsec): min=6160, max=39928, avg=8950.60, stdev=3266.74 00:33:16.547 clat (usec): min=1052, max=5719, avg=3231.29, stdev=534.58 00:33:16.547 lat (usec): min=1063, max=5726, avg=3240.24, stdev=534.11 00:33:16.547 clat percentiles (usec): 00:33:16.547 | 1.00th=[ 2311], 5.00th=[ 2704], 10.00th=[ 2802], 20.00th=[ 2868], 00:33:16.547 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3130], 00:33:16.547 | 70.00th=[ 3294], 80.00th=[ 3490], 90.00th=[ 3851], 95.00th=[ 4490], 00:33:16.547 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 5473], 99.95th=[ 5604], 00:33:16.547 | 99.99th=[ 5735] 00:33:16.547 bw ( KiB/s): min=18320, max=21328, per=23.51%, avg=19644.00, stdev=945.21, samples=10 00:33:16.547 iops : min= 2290, max= 2666, avg=2455.50, stdev=118.15, samples=10 00:33:16.547 lat (msec) : 2=0.28%, 4=90.95%, 10=8.76% 00:33:16.547 cpu : usr=95.72%, sys=3.98%, ctx=6, majf=0, minf=9 00:33:16.547 IO depths : 1=0.5%, 2=2.5%, 4=68.5%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:16.547 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:16.547 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:16.547 issued rwts: total=12283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:16.547 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:16.547 00:33:16.547 Run status group 0 (all jobs): 00:33:16.547 READ: bw=81.6MiB/s (85.6MB/s), 19.2MiB/s-22.1MiB/s (20.1MB/s-23.2MB/s), io=408MiB (428MB), run=5001-5004msec 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.547 00:33:16.547 real 0m24.452s 00:33:16.547 user 4m50.452s 00:33:16.547 sys 0m4.902s 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:16.547 11:01:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:16.547 ************************************ 00:33:16.547 END TEST fio_dif_rand_params 00:33:16.547 ************************************ 00:33:16.547 11:01:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:16.547 11:01:44 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:16.547 11:01:44 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:16.547 11:01:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:16.547 ************************************ 00:33:16.547 START TEST fio_dif_digest 00:33:16.547 ************************************ 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:16.547 
11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:16.547 bdev_null0 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:16.547 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:16.547 [2024-11-07 11:01:44.212599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.805 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:16.805 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:16.806 { 00:33:16.806 "params": { 00:33:16.806 "name": "Nvme$subsystem", 00:33:16.806 "trtype": "$TEST_TRANSPORT", 00:33:16.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:16.806 "adrfam": "ipv4", 00:33:16.806 "trsvcid": "$NVMF_PORT", 00:33:16.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:16.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:16.806 "hdgst": ${hdgst:-false}, 00:33:16.806 "ddgst": ${ddgst:-false} 00:33:16.806 }, 00:33:16.806 "method": "bdev_nvme_attach_controller" 00:33:16.806 } 00:33:16.806 EOF 00:33:16.806 )") 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:16.806 "params": { 00:33:16.806 "name": "Nvme0", 00:33:16.806 "trtype": "tcp", 00:33:16.806 "traddr": "10.0.0.2", 00:33:16.806 "adrfam": "ipv4", 00:33:16.806 "trsvcid": "4420", 00:33:16.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:16.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:16.806 "hdgst": true, 00:33:16.806 "ddgst": true 00:33:16.806 }, 00:33:16.806 "method": "bdev_nvme_attach_controller" 00:33:16.806 }' 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:16.806 11:01:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:17.064 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:17.064 ... 
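The resolved JSON printed above enables both TCP header and data digests ("hdgst": true, "ddgst": true) on the attached controller before the 3-thread 128 KiB randread job starts. A minimal standalone sketch of that host-side configuration follows; the outer "subsystems"/"bdev" wrapper and the bdev.json/job.fio file names are assumptions for illustration, while the method name, parameters, and digest flags are copied verbatim from the trace:

# Sketch: wrap the attach entry from the trace into a config file that fio's
# spdk_bdev ioengine can load with --spdk_json_conf.
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio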
00:33:17.064 fio-3.35 00:33:17.064 Starting 3 threads 00:33:29.259 00:33:29.259 filename0: (groupid=0, jobs=1): err= 0: pid=2943749: Thu Nov 7 11:01:55 2024 00:33:29.259 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(365MiB/10046msec) 00:33:29.259 slat (nsec): min=6373, max=47132, avg=11904.31, stdev=2146.20 00:33:29.259 clat (usec): min=7772, max=48064, avg=10289.35, stdev=1224.98 00:33:29.259 lat (usec): min=7784, max=48078, avg=10301.25, stdev=1224.91 00:33:29.259 clat percentiles (usec): 00:33:29.259 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:33:29.259 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:33:29.259 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:33:29.259 | 99.00th=[12125], 99.50th=[12518], 99.90th=[13960], 99.95th=[46924], 00:33:29.259 | 99.99th=[47973] 00:33:29.259 bw ( KiB/s): min=34560, max=38656, per=34.69%, avg=37363.20, stdev=945.09, samples=20 00:33:29.259 iops : min= 270, max= 302, avg=291.90, stdev= 7.38, samples=20 00:33:29.259 lat (msec) : 10=36.49%, 20=63.44%, 50=0.07% 00:33:29.259 cpu : usr=94.80%, sys=4.87%, ctx=30, majf=0, minf=27 00:33:29.259 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:29.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.259 issued rwts: total=2921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.259 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:29.259 filename0: (groupid=0, jobs=1): err= 0: pid=2943750: Thu Nov 7 11:01:55 2024 00:33:29.259 read: IOPS=285, BW=35.7MiB/s (37.4MB/s)(359MiB/10045msec) 00:33:29.259 slat (nsec): min=6482, max=45377, avg=14296.55, stdev=2706.45 00:33:29.259 clat (usec): min=7262, max=48532, avg=10475.53, stdev=1257.64 00:33:29.259 lat (usec): min=7279, max=48547, avg=10489.83, stdev=1257.55 00:33:29.259 clat percentiles (usec): 00:33:29.259 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:33:29.259 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:33:29.259 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:33:29.259 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13435], 99.95th=[46400], 00:33:29.259 | 99.99th=[48497] 00:33:29.259 bw ( KiB/s): min=34304, max=38912, per=34.06%, avg=36684.80, stdev=1169.02, samples=20 00:33:29.259 iops : min= 268, max= 304, avg=286.60, stdev= 9.13, samples=20 00:33:29.259 lat (msec) : 10=28.70%, 20=71.23%, 50=0.07% 00:33:29.259 cpu : usr=90.83%, sys=6.07%, ctx=718, majf=0, minf=38 00:33:29.259 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:29.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.259 issued rwts: total=2868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.259 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:29.259 filename0: (groupid=0, jobs=1): err= 0: pid=2943751: Thu Nov 7 11:01:55 2024 00:33:29.259 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(333MiB/10044msec) 00:33:29.259 slat (nsec): min=6490, max=59713, avg=11977.46, stdev=2310.08 00:33:29.259 clat (usec): min=8536, max=51955, avg=11281.12, stdev=1375.66 00:33:29.259 lat (usec): min=8548, max=51968, avg=11293.10, stdev=1375.70 00:33:29.259 clat percentiles (usec): 00:33:29.259 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:33:29.259 
| 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:33:29.259 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12387], 95.00th=[12780], 00:33:29.259 | 99.00th=[13435], 99.50th=[13829], 99.90th=[14746], 99.95th=[47973], 00:33:29.259 | 99.99th=[52167] 00:33:29.259 bw ( KiB/s): min=32000, max=35584, per=31.64%, avg=34073.60, stdev=866.75, samples=20 00:33:29.259 iops : min= 250, max= 278, avg=266.20, stdev= 6.77, samples=20 00:33:29.259 lat (msec) : 10=6.68%, 20=93.24%, 50=0.04%, 100=0.04% 00:33:29.259 cpu : usr=95.43%, sys=4.23%, ctx=26, majf=0, minf=35 00:33:29.259 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:29.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.259 issued rwts: total=2664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.259 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:29.259 00:33:29.259 Run status group 0 (all jobs): 00:33:29.259 READ: bw=105MiB/s (110MB/s), 33.2MiB/s-36.3MiB/s (34.8MB/s-38.1MB/s), io=1057MiB (1108MB), run=10044-10046msec 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:29.260 00:33:29.260 real 0m11.107s 00:33:29.260 user 0m34.994s 00:33:29.260 sys 0m1.805s 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:29.260 11:01:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:29.260 ************************************ 00:33:29.260 END TEST fio_dif_digest 00:33:29.260 ************************************ 00:33:29.260 11:01:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:29.260 11:01:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:33:29.260 11:01:55 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:29.260 11:01:55 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:33:29.260 11:01:55 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:29.260 11:01:55 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:33:29.260 11:01:55 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:29.260 11:01:55 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:29.260 rmmod nvme_tcp 00:33:29.260 rmmod nvme_fabrics 00:33:29.260 rmmod nvme_keyring 00:33:29.260 11:01:55 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:29.260 11:01:55 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:33:29.260 11:01:55 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:33:29.260 11:01:55 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2935338 ']' 00:33:29.260 11:01:55 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2935338 00:33:29.260 11:01:55 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 2935338 ']' 00:33:29.260 11:01:55 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 2935338 00:33:29.260 11:01:55 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:33:29.260 11:01:55 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:29.260 11:01:55 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2935338 00:33:29.260 11:01:55 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:29.260 11:01:55 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:29.260 11:01:55 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2935338' 00:33:29.260 killing process with pid 2935338 00:33:29.260 11:01:55 nvmf_dif -- common/autotest_common.sh@971 -- # kill 2935338 00:33:29.260 11:01:55 nvmf_dif -- common/autotest_common.sh@976 -- # wait 2935338 00:33:29.260 11:01:55 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:33:29.260 11:01:55 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:30.195 Waiting for block devices as requested 00:33:30.453 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:30.453 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:30.453 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:30.711 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:30.711 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:30.711 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:30.711 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:30.969 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:30.969 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:30.969 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:30.969 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:31.228 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:31.228 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:31.228 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:31.486 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:31.486 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:31.486 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:31.486 11:01:59 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:31.486 11:01:59 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:31.486 11:01:59 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:33:31.486 11:01:59 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:33:31.486 11:01:59 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:31.486 11:01:59 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:33:31.745 11:01:59 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:31.745 11:01:59 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:31.745 11:01:59 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.745 11:01:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:31.745 11:01:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.647 11:02:01 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:33.647 
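The nvmftestfini trace above tears the environment down: the kernel NVMe/TCP initiator modules are unloaded, the nvmf target process (pid 2935338 in this run) is killed, setup.sh reset rebinds the NVMe devices, the SPDK_NVMF iptables rules are dropped, and the test interface address is flushed. A hedged sketch of the same steps performed by hand, with the pid and interface name taken from this particular run:

# Sketch of the teardown sequence traced above (paths relative to the spdk repo).
sudo modprobe -v -r nvme-tcp     # also pulls out nvme_fabrics / nvme_keyring here
sudo modprobe -v -r nvme-fabrics
kill 2935338                     # nvmf target pid from this run (killprocess in the trace)
sudo ./scripts/setup.sh reset    # return the NVMe devices to their kernel drivers
sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore
sudo ip -4 addr flush cvl_0_1    # flush the test NIC address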
00:33:33.647 real 1m12.921s 00:33:33.647 user 7m6.711s 00:33:33.647 sys 0m19.664s 00:33:33.647 11:02:01 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:33.647 11:02:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:33.647 ************************************ 00:33:33.647 END TEST nvmf_dif 00:33:33.647 ************************************ 00:33:33.647 11:02:01 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:33.647 11:02:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:33.647 11:02:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:33.647 11:02:01 -- common/autotest_common.sh@10 -- # set +x 00:33:33.647 ************************************ 00:33:33.647 START TEST nvmf_abort_qd_sizes 00:33:33.647 ************************************ 00:33:33.647 11:02:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:33.907 * Looking for test storage... 00:33:33.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:33.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:33.907 --rc genhtml_branch_coverage=1 00:33:33.907 --rc genhtml_function_coverage=1 00:33:33.907 --rc genhtml_legend=1 00:33:33.907 --rc geninfo_all_blocks=1 00:33:33.907 --rc geninfo_unexecuted_blocks=1 00:33:33.907 00:33:33.907 ' 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:33.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:33.907 --rc genhtml_branch_coverage=1 00:33:33.907 --rc genhtml_function_coverage=1 00:33:33.907 --rc genhtml_legend=1 00:33:33.907 --rc geninfo_all_blocks=1 00:33:33.907 --rc geninfo_unexecuted_blocks=1 00:33:33.907 00:33:33.907 ' 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:33.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:33.907 --rc genhtml_branch_coverage=1 00:33:33.907 --rc genhtml_function_coverage=1 00:33:33.907 --rc genhtml_legend=1 00:33:33.907 --rc geninfo_all_blocks=1 00:33:33.907 --rc geninfo_unexecuted_blocks=1 00:33:33.907 00:33:33.907 ' 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:33.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:33.907 --rc genhtml_branch_coverage=1 00:33:33.907 --rc genhtml_function_coverage=1 00:33:33.907 --rc genhtml_legend=1 00:33:33.907 --rc geninfo_all_blocks=1 00:33:33.907 --rc geninfo_unexecuted_blocks=1 00:33:33.907 00:33:33.907 ' 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.907 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:33.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:33:33.908 11:02:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:39.176 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:39.176 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:39.176 Found net devices under 0000:86:00.0: cvl_0_0 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:39.176 Found net devices under 0000:86:00.1: cvl_0_1 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:39.176 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:39.177 11:02:06 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:39.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:39.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:33:39.177 00:33:39.177 --- 10.0.0.2 ping statistics --- 00:33:39.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.177 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:39.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:39.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:33:39.177 00:33:39.177 --- 10.0.0.1 ping statistics --- 00:33:39.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:39.177 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:39.177 11:02:06 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:41.074 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:41.074 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:41.074 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:41.074 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:41.074 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:41.074 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:41.332 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:41.332 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:41.332 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:41.332 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:41.332 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:41.332 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:41.332 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:41.332 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:41.332 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:41.332 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:42.267 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:42.267 11:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:42.267 11:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:42.267 11:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:42.267 11:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:42.267 11:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:42.267 11:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:42.267 11:02:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:33:42.267 11:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:42.267 11:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:42.267 11:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:42.268 11:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2951511 00:33:42.268 11:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2951511 00:33:42.268 11:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 2951511 ']' 00:33:42.268 11:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:42.268 11:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:42.268 11:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:42.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
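Note on the setup traced above: nvmf_tcp_init splits the back-to-back E810 pair so that the target-side port (cvl_0_0, 10.0.0.2) lives inside the cvl_0_0_ns_spdk namespace while the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, forcing all NVMe/TCP traffic across the physical link instead of loopback. A minimal standalone sketch of that topology, reusing the interface and namespace names from the log (run as root; timestamps, the ipts comment tag, and error handling omitted):

    # Recreate the namespaced back-to-back topology used by nvmf_tcp_init.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP connections in
    ping -c 1 10.0.0.2                                                 # root namespace -> target side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> initiator side

Launching nvmf_tgt under "ip netns exec cvl_0_0_ns_spdk", as the next trace entry does, is what keeps the target's 4420 listener reachable only through that namespaced port.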
00:33:42.268 11:02:09 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:42.268 11:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:42.268 11:02:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:42.268 [2024-11-07 11:02:09.862834] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:33:42.268 [2024-11-07 11:02:09.862881] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:42.268 [2024-11-07 11:02:09.930161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:42.526 [2024-11-07 11:02:09.974787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:42.526 [2024-11-07 11:02:09.974827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:42.526 [2024-11-07 11:02:09.974834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:42.526 [2024-11-07 11:02:09.974841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:42.526 [2024-11-07 11:02:09.974846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:42.526 [2024-11-07 11:02:09.976381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.526 [2024-11-07 11:02:09.976470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:42.526 [2024-11-07 11:02:09.976526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:42.526 [2024-11-07 11:02:09.976528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
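The nvme_in_userspace helper traced above walks the NVMe-class PCI functions (class code 0x010802) and, consistent with the single 0000:5e:00.0 result the test then assigns to $nvme, keeps only devices no longer claimed by the kernel nvme driver, which is why it probes /sys/bus/pci/drivers/nvme/0000:5e:00.0 right after setup.sh rebound that device to vfio-pci. A rough sysfs-only sketch of the same filter, not the SPDK helper itself (which relies on its pci_bus_cache):

    # Keep NVMe-class (0x010802) PCI functions that are not bound to the kernel nvme driver.
    for dev in /sys/bus/pci/devices/*; do
        [[ $(<"$dev/class") == 0x010802 ]] || continue          # skip non-NVMe functions
        bdf=${dev##*/}
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue     # still owned by the kernel nvme driver
        echo "$bdf"                                             # candidate for userspace (vfio-pci) tests
    done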
00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:42.526 11:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:42.526 ************************************ 00:33:42.526 START TEST spdk_target_abort 00:33:42.526 ************************************ 00:33:42.526 11:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:33:42.526 11:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:42.526 11:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:33:42.526 11:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.526 11:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:45.811 spdk_targetn1 00:33:45.811 11:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.811 11:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:45.811 11:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.811 11:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:45.811 [2024-11-07 11:02:12.989163] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:45.811 11:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.811 11:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:33:45.811 11:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.811 11:02:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.811 11:02:13 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:45.811 [2024-11-07 11:02:13.045867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:45.811 11:02:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 
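Before the qd=4/24/64 abort passes, spdk_target_abort assembles the whole TCP target from five rpc_cmd calls: attach the local NVMe device as controller spdk_target (exposing bdev spdk_targetn1), create the TCP transport, create subsystem nqn.2016-06.io.spdk:testnqn, add the namespace, and add the 10.0.0.2:4420 listener. rpc_cmd forwards these to scripts/rpc.py, so a standalone equivalent looks roughly like the sketch below, with flags copied from the trace and paths relative to an SPDK checkout; the qd=4 run's output follows immediately after in the log:

    # Standalone equivalent of the spdk_target_abort bring-up, issued through scripts/rpc.py.
    rpc=scripts/rpc.py
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target   # exposes bdev spdk_targetn1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    # Then drive the abort example against it at each queue depth, exactly as the trace does:
    build/examples/abort -q 4 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'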
00:33:49.098 Initializing NVMe Controllers 00:33:49.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:49.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:49.098 Initialization complete. Launching workers. 00:33:49.098 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16195, failed: 0 00:33:49.098 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1325, failed to submit 14870 00:33:49.098 success 755, unsuccessful 570, failed 0 00:33:49.098 11:02:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:49.098 11:02:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:52.386 Initializing NVMe Controllers 00:33:52.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:52.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:52.386 Initialization complete. Launching workers. 00:33:52.386 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8575, failed: 0 00:33:52.386 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1280, failed to submit 7295 00:33:52.386 success 284, unsuccessful 996, failed 0 00:33:52.386 11:02:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:52.386 11:02:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:55.675 Initializing NVMe Controllers 00:33:55.675 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:55.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:55.675 Initialization complete. Launching workers. 
00:33:55.675 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38111, failed: 0 00:33:55.675 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2798, failed to submit 35313 00:33:55.675 success 599, unsuccessful 2199, failed 0 00:33:55.675 11:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:55.675 11:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.675 11:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:55.675 11:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:55.675 11:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:55.675 11:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:55.675 11:02:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:56.611 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:56.611 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2951511 00:33:56.611 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 2951511 ']' 00:33:56.611 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 2951511 00:33:56.611 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:33:56.611 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:56.611 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2951511 00:33:56.611 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:56.611 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:56.611 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2951511' 00:33:56.611 killing process with pid 2951511 00:33:56.611 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 2951511 00:33:56.611 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 2951511 00:33:56.870 00:33:56.870 real 0m14.274s 00:33:56.870 user 0m54.329s 00:33:56.870 sys 0m2.613s 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:56.870 ************************************ 00:33:56.870 END TEST spdk_target_abort 00:33:56.870 ************************************ 00:33:56.870 11:02:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:56.870 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:56.870 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:56.870 11:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:56.870 ************************************ 00:33:56.870 START TEST kernel_target_abort 00:33:56.870 
************************************ 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:56.870 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:56.871 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:56.871 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:33:56.871 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:56.871 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:56.871 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:56.871 11:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:59.407 Waiting for block devices as requested 00:33:59.407 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:59.407 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:59.665 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:59.665 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:59.665 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:59.924 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:59.924 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:59.924 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:59.924 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:00.183 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:00.183 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:00.183 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:00.442 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:00.442 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:00.442 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:00.442 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:00.701 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:00.701 No valid GPT data, bailing 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:00.701 11:02:28 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:00.701 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:00.960 00:34:00.960 Discovery Log Number of Records 2, Generation counter 2 00:34:00.960 =====Discovery Log Entry 0====== 00:34:00.960 trtype: tcp 00:34:00.960 adrfam: ipv4 00:34:00.960 subtype: current discovery subsystem 00:34:00.960 treq: not specified, sq flow control disable supported 00:34:00.960 portid: 1 00:34:00.960 trsvcid: 4420 00:34:00.960 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:00.960 traddr: 10.0.0.1 00:34:00.960 eflags: none 00:34:00.960 sectype: none 00:34:00.960 =====Discovery Log Entry 1====== 00:34:00.960 trtype: tcp 00:34:00.960 adrfam: ipv4 00:34:00.960 subtype: nvme subsystem 00:34:00.960 treq: not specified, sq flow control disable supported 00:34:00.960 portid: 1 00:34:00.960 trsvcid: 4420 00:34:00.960 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:00.960 traddr: 10.0.0.1 00:34:00.960 eflags: none 00:34:00.960 sectype: none 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:00.960 11:02:28 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:00.960 11:02:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:04.248 Initializing NVMe Controllers 00:34:04.248 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:04.248 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:04.248 Initialization complete. Launching workers. 00:34:04.248 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90702, failed: 0 00:34:04.248 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 90702, failed to submit 0 00:34:04.248 success 0, unsuccessful 90702, failed 0 00:34:04.248 11:02:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:04.248 11:02:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:07.535 Initializing NVMe Controllers 00:34:07.535 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:07.535 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:07.535 Initialization complete. Launching workers. 
00:34:07.535 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 143703, failed: 0 00:34:07.535 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36058, failed to submit 107645 00:34:07.535 success 0, unsuccessful 36058, failed 0 00:34:07.535 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:07.535 11:02:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:10.071 Initializing NVMe Controllers 00:34:10.071 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:10.071 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:10.071 Initialization complete. Launching workers. 00:34:10.071 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 136053, failed: 0 00:34:10.071 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34074, failed to submit 101979 00:34:10.071 success 0, unsuccessful 34074, failed 0 00:34:10.071 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:10.071 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:10.071 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:10.071 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:10.071 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:10.071 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:10.071 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:10.072 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:10.072 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:10.332 11:02:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:12.868 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:12.868 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:12.868 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:12.868 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:12.868 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:12.868 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:12.868 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:12.868 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:12.868 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:12.868 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:12.868 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:12.868 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:12.868 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:12.869 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:13.127 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:34:13.127 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:13.696 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:13.956 00:34:13.956 real 0m16.949s 00:34:13.956 user 0m8.872s 00:34:13.956 sys 0m4.721s 00:34:13.956 11:02:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:13.956 11:02:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:13.956 ************************************ 00:34:13.956 END TEST kernel_target_abort 00:34:13.956 ************************************ 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:13.956 rmmod nvme_tcp 00:34:13.956 rmmod nvme_fabrics 00:34:13.956 rmmod nvme_keyring 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2951511 ']' 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2951511 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 2951511 ']' 00:34:13.956 11:02:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 2951511 00:34:13.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2951511) - No such process 00:34:13.957 11:02:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 2951511 is not found' 00:34:13.957 Process with pid 2951511 is not found 00:34:13.957 11:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:13.957 11:02:41 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:16.634 Waiting for block devices as requested 00:34:16.634 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:16.634 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:16.634 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:16.893 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:16.893 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:16.893 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:16.893 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:17.151 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:17.151 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:17.151 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:17.151 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:17.408 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:17.408 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:17.409 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:17.409 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:17.666 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:17.666 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:17.666 11:02:45 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:17.666 11:02:45 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:17.666 11:02:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:34:17.666 11:02:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:34:17.666 11:02:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:17.666 11:02:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:34:17.666 11:02:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:17.666 11:02:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:17.666 11:02:45 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.666 11:02:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:17.666 11:02:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:20.200 11:02:47 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:20.200 00:34:20.200 real 0m46.085s 00:34:20.200 user 1m6.765s 00:34:20.200 sys 0m14.974s 00:34:20.200 11:02:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:20.200 11:02:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:20.200 ************************************ 00:34:20.200 END TEST nvmf_abort_qd_sizes 00:34:20.200 ************************************ 00:34:20.200 11:02:47 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:20.200 11:02:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:20.200 11:02:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:20.200 11:02:47 -- common/autotest_common.sh@10 -- # set +x 00:34:20.200 ************************************ 00:34:20.200 START TEST keyring_file 00:34:20.200 ************************************ 00:34:20.200 11:02:47 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:34:20.200 * Looking for test storage... 
00:34:20.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:20.200 11:02:47 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:20.200 11:02:47 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:34:20.200 11:02:47 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:20.200 11:02:47 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@345 -- # : 1 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@353 -- # local d=1 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@355 -- # echo 1 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@353 -- # local d=2 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@355 -- # echo 2 00:34:20.200 11:02:47 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:34:20.201 11:02:47 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:20.201 11:02:47 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:20.201 11:02:47 keyring_file -- scripts/common.sh@368 -- # return 0 00:34:20.201 11:02:47 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:20.201 11:02:47 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.201 --rc genhtml_branch_coverage=1 00:34:20.201 --rc genhtml_function_coverage=1 00:34:20.201 --rc genhtml_legend=1 00:34:20.201 --rc geninfo_all_blocks=1 00:34:20.201 --rc geninfo_unexecuted_blocks=1 00:34:20.201 00:34:20.201 ' 00:34:20.201 11:02:47 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.201 --rc genhtml_branch_coverage=1 00:34:20.201 --rc genhtml_function_coverage=1 00:34:20.201 --rc genhtml_legend=1 00:34:20.201 --rc geninfo_all_blocks=1 
00:34:20.201 --rc geninfo_unexecuted_blocks=1 00:34:20.201 00:34:20.201 ' 00:34:20.201 11:02:47 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.201 --rc genhtml_branch_coverage=1 00:34:20.201 --rc genhtml_function_coverage=1 00:34:20.201 --rc genhtml_legend=1 00:34:20.201 --rc geninfo_all_blocks=1 00:34:20.201 --rc geninfo_unexecuted_blocks=1 00:34:20.201 00:34:20.201 ' 00:34:20.201 11:02:47 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:20.201 --rc genhtml_branch_coverage=1 00:34:20.201 --rc genhtml_function_coverage=1 00:34:20.201 --rc genhtml_legend=1 00:34:20.201 --rc geninfo_all_blocks=1 00:34:20.201 --rc geninfo_unexecuted_blocks=1 00:34:20.201 00:34:20.201 ' 00:34:20.201 11:02:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:20.201 11:02:47 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:34:20.201 11:02:47 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:20.201 11:02:47 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:20.201 11:02:47 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:20.201 11:02:47 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.201 11:02:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.201 11:02:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.201 11:02:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:34:20.201 11:02:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@51 -- # : 0 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:20.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:20.201 11:02:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:20.201 11:02:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:20.201 11:02:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:34:20.201 11:02:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:34:20.201 11:02:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:34:20.201 11:02:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
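
The prep_key trace that begins here and continues below writes key0 to a mktemp'd file: the hex key is converted into the NVMe TLS PSK interchange string by an inline "python -" snippet, and the resulting file is chmod'ed 0600. A minimal sketch of such a conversion, assuming the TP 8006 interchange layout (prefix, two-digit hash selector, base64 of the key bytes plus a little-endian CRC32) rather than quoting the exact nvmf/common.sh helper:

# Sketch only: not the exact format_interchange_psk/format_key helper.
# Assumptions: key supplied as hex, digest 0 means no PSK hash, and a
# little-endian CRC32 is appended before base64 encoding (NVMe TP 8006).
format_psk_sketch() {
  local key=$1 digest=${2:-0}
  python3 - "$key" "$digest" << 'PYEOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PYEOF
}

# key0 from this run:
format_psk_sketch 00112233445566778899aabbccddeeff 0
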
00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6eFrmjJVeg 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6eFrmjJVeg 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6eFrmjJVeg 00:34:20.201 11:02:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.6eFrmjJVeg 00:34:20.201 11:02:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VcdiNo7w4h 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:20.201 11:02:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VcdiNo7w4h 00:34:20.201 11:02:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VcdiNo7w4h 00:34:20.201 11:02:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.VcdiNo7w4h 00:34:20.201 11:02:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=2960126 00:34:20.201 11:02:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2960126 00:34:20.201 11:02:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:20.201 11:02:47 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2960126 ']' 00:34:20.201 11:02:47 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.201 11:02:47 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:20.201 11:02:47 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.201 11:02:47 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:20.201 11:02:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:20.201 [2024-11-07 11:02:47.808275] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:34:20.201 [2024-11-07 11:02:47.808327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960126 ] 00:34:20.460 [2024-11-07 11:02:47.871646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.460 [2024-11-07 11:02:47.912287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.460 11:02:48 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:20.460 11:02:48 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:34:20.718 11:02:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:34:20.718 11:02:48 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.718 11:02:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:20.718 [2024-11-07 11:02:48.132189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.718 null0 00:34:20.719 [2024-11-07 11:02:48.164246] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:20.719 [2024-11-07 11:02:48.164626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.719 11:02:48 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:20.719 [2024-11-07 11:02:48.192312] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:34:20.719 request: 00:34:20.719 { 00:34:20.719 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:34:20.719 "secure_channel": false, 00:34:20.719 "listen_address": { 00:34:20.719 "trtype": "tcp", 00:34:20.719 "traddr": "127.0.0.1", 00:34:20.719 "trsvcid": "4420" 00:34:20.719 }, 00:34:20.719 "method": "nvmf_subsystem_add_listener", 00:34:20.719 "req_id": 1 00:34:20.719 } 00:34:20.719 Got JSON-RPC error response 00:34:20.719 response: 00:34:20.719 { 00:34:20.719 
"code": -32602, 00:34:20.719 "message": "Invalid parameters" 00:34:20.719 } 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:20.719 11:02:48 keyring_file -- keyring/file.sh@47 -- # bperfpid=2960301 00:34:20.719 11:02:48 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2960301 /var/tmp/bperf.sock 00:34:20.719 11:02:48 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2960301 ']' 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:20.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:20.719 11:02:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:20.719 [2024-11-07 11:02:48.247343] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:34:20.719 [2024-11-07 11:02:48.247384] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960301 ] 00:34:20.719 [2024-11-07 11:02:48.308618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.719 [2024-11-07 11:02:48.349234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:20.977 11:02:48 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:20.977 11:02:48 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:34:20.977 11:02:48 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6eFrmjJVeg 00:34:20.977 11:02:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6eFrmjJVeg 00:34:21.235 11:02:48 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VcdiNo7w4h 00:34:21.235 11:02:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VcdiNo7w4h 00:34:21.235 11:02:48 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:34:21.235 11:02:48 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:34:21.235 11:02:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:21.235 11:02:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:21.235 11:02:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:34:21.493 11:02:49 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.6eFrmjJVeg == \/\t\m\p\/\t\m\p\.\6\e\F\r\m\j\J\V\e\g ]] 00:34:21.493 11:02:49 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:34:21.493 11:02:49 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:34:21.493 11:02:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:21.493 11:02:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:21.493 11:02:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:21.751 11:02:49 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.VcdiNo7w4h == \/\t\m\p\/\t\m\p\.\V\c\d\i\N\o\7\w\4\h ]] 00:34:21.751 11:02:49 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:34:21.751 11:02:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:21.751 11:02:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:21.751 11:02:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:21.751 11:02:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:21.751 11:02:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:22.009 11:02:49 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:34:22.009 11:02:49 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:34:22.009 11:02:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:22.009 11:02:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:22.009 11:02:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:22.009 11:02:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:22.009 11:02:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:22.009 11:02:49 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:34:22.009 11:02:49 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:22.009 11:02:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:22.266 [2024-11-07 11:02:49.828274] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:22.266 nvme0n1 00:34:22.266 11:02:49 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:34:22.266 11:02:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:22.266 11:02:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:22.266 11:02:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:22.266 11:02:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:22.266 11:02:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:22.523 11:02:50 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:34:22.523 11:02:50 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:34:22.523 11:02:50 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:34:22.523 11:02:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:22.523 11:02:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:22.523 11:02:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:22.523 11:02:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:22.781 11:02:50 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:34:22.781 11:02:50 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:22.781 Running I/O for 1 seconds... 00:34:24.153 18174.00 IOPS, 70.99 MiB/s 00:34:24.153 Latency(us) 00:34:24.153 [2024-11-07T10:02:51.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:24.153 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:34:24.153 nvme0n1 : 1.00 18221.15 71.18 0.00 0.00 7011.99 4388.06 12822.26 00:34:24.153 [2024-11-07T10:02:51.824Z] =================================================================================================================== 00:34:24.153 [2024-11-07T10:02:51.824Z] Total : 18221.15 71.18 0.00 0.00 7011.99 4388.06 12822.26 00:34:24.153 { 00:34:24.153 "results": [ 00:34:24.153 { 00:34:24.153 "job": "nvme0n1", 00:34:24.153 "core_mask": "0x2", 00:34:24.153 "workload": "randrw", 00:34:24.153 "percentage": 50, 00:34:24.153 "status": "finished", 00:34:24.153 "queue_depth": 128, 00:34:24.153 "io_size": 4096, 00:34:24.154 "runtime": 1.004437, 00:34:24.154 "iops": 18221.15274526924, 00:34:24.154 "mibps": 71.17637791120796, 00:34:24.154 "io_failed": 0, 00:34:24.154 "io_timeout": 0, 00:34:24.154 "avg_latency_us": 7011.989715355414, 00:34:24.154 "min_latency_us": 4388.062608695652, 00:34:24.154 "max_latency_us": 12822.260869565218 00:34:24.154 } 00:34:24.154 ], 00:34:24.154 "core_count": 1 00:34:24.154 } 00:34:24.154 11:02:51 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:24.154 11:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:24.154 11:02:51 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:34:24.154 11:02:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:24.154 11:02:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:24.154 11:02:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:24.154 11:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:24.154 11:02:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:24.412 11:02:51 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:34:24.412 11:02:51 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:34:24.412 11:02:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:24.412 11:02:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:24.412 11:02:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:24.412 11:02:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:24.412 11:02:51 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:24.412 11:02:52 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:34:24.412 11:02:52 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:24.412 11:02:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:34:24.412 11:02:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:24.412 11:02:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:34:24.412 11:02:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:24.412 11:02:52 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:34:24.412 11:02:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:24.412 11:02:52 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:24.412 11:02:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:34:24.670 [2024-11-07 11:02:52.226557] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:24.670 [2024-11-07 11:02:52.226944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c61f0 (107): Transport endpoint is not connected 00:34:24.670 [2024-11-07 11:02:52.227939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c61f0 (9): Bad file descriptor 00:34:24.670 [2024-11-07 11:02:52.228941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:24.670 [2024-11-07 11:02:52.228950] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:24.670 [2024-11-07 11:02:52.228958] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:24.670 [2024-11-07 11:02:52.228967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:34:24.670 request: 00:34:24.670 { 00:34:24.670 "name": "nvme0", 00:34:24.670 "trtype": "tcp", 00:34:24.670 "traddr": "127.0.0.1", 00:34:24.670 "adrfam": "ipv4", 00:34:24.670 "trsvcid": "4420", 00:34:24.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:24.670 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:24.670 "prchk_reftag": false, 00:34:24.670 "prchk_guard": false, 00:34:24.670 "hdgst": false, 00:34:24.670 "ddgst": false, 00:34:24.670 "psk": "key1", 00:34:24.670 "allow_unrecognized_csi": false, 00:34:24.670 "method": "bdev_nvme_attach_controller", 00:34:24.670 "req_id": 1 00:34:24.670 } 00:34:24.670 Got JSON-RPC error response 00:34:24.670 response: 00:34:24.670 { 00:34:24.670 "code": -5, 00:34:24.670 "message": "Input/output error" 00:34:24.670 } 00:34:24.670 11:02:52 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:34:24.671 11:02:52 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:24.671 11:02:52 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:24.671 11:02:52 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:24.671 11:02:52 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:34:24.671 11:02:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:24.671 11:02:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:24.671 11:02:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:24.671 11:02:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:24.671 11:02:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:24.928 11:02:52 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:34:24.928 11:02:52 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:34:24.928 11:02:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:24.928 11:02:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:24.928 11:02:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:24.928 11:02:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:24.928 11:02:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:25.187 11:02:52 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:34:25.187 11:02:52 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:34:25.187 11:02:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:25.187 11:02:52 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:34:25.187 11:02:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:34:25.445 11:02:53 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:34:25.445 11:02:53 keyring_file -- keyring/file.sh@78 -- # jq length 00:34:25.445 11:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:25.703 11:02:53 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:34:25.703 11:02:53 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.6eFrmjJVeg 00:34:25.703 11:02:53 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.6eFrmjJVeg 00:34:25.703 11:02:53 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:34:25.703 11:02:53 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.6eFrmjJVeg 00:34:25.703 11:02:53 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:34:25.703 11:02:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:25.703 11:02:53 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:34:25.703 11:02:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:25.703 11:02:53 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6eFrmjJVeg 00:34:25.703 11:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6eFrmjJVeg 00:34:25.961 [2024-11-07 11:02:53.416006] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6eFrmjJVeg': 0100660 00:34:25.961 [2024-11-07 11:02:53.416032] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:34:25.961 request: 00:34:25.961 { 00:34:25.961 "name": "key0", 00:34:25.961 "path": "/tmp/tmp.6eFrmjJVeg", 00:34:25.961 "method": "keyring_file_add_key", 00:34:25.961 "req_id": 1 00:34:25.961 } 00:34:25.961 Got JSON-RPC error response 00:34:25.961 response: 00:34:25.961 { 00:34:25.961 "code": -1, 00:34:25.961 "message": "Operation not permitted" 00:34:25.961 } 00:34:25.961 11:02:53 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:34:25.961 11:02:53 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:25.961 11:02:53 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:25.961 11:02:53 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:25.961 11:02:53 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.6eFrmjJVeg 00:34:25.961 11:02:53 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6eFrmjJVeg 00:34:25.961 11:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6eFrmjJVeg 00:34:25.961 11:02:53 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.6eFrmjJVeg 00:34:25.961 11:02:53 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:34:26.220 11:02:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:26.220 11:02:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:26.220 11:02:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:26.220 11:02:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:26.220 11:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:26.220 11:02:53 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:34:26.220 11:02:53 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:26.220 11:02:53 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:34:26.220 11:02:53 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:26.220 11:02:53 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:34:26.220 11:02:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:26.220 11:02:53 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:34:26.220 11:02:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:26.220 11:02:53 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:26.220 11:02:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:26.478 [2024-11-07 11:02:54.013627] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.6eFrmjJVeg': No such file or directory 00:34:26.478 [2024-11-07 11:02:54.013653] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:34:26.478 [2024-11-07 11:02:54.013669] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:34:26.478 [2024-11-07 11:02:54.013676] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:34:26.478 [2024-11-07 11:02:54.013683] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:26.478 [2024-11-07 11:02:54.013689] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:34:26.478 request: 00:34:26.478 { 00:34:26.478 "name": "nvme0", 00:34:26.478 "trtype": "tcp", 00:34:26.478 "traddr": "127.0.0.1", 00:34:26.478 "adrfam": "ipv4", 00:34:26.478 "trsvcid": "4420", 00:34:26.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:26.478 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:26.478 "prchk_reftag": false, 00:34:26.478 "prchk_guard": false, 00:34:26.478 "hdgst": false, 00:34:26.478 "ddgst": false, 00:34:26.478 "psk": "key0", 00:34:26.478 "allow_unrecognized_csi": false, 00:34:26.478 "method": "bdev_nvme_attach_controller", 00:34:26.478 "req_id": 1 00:34:26.478 } 00:34:26.478 Got JSON-RPC error response 00:34:26.478 response: 00:34:26.478 { 00:34:26.478 "code": -19, 00:34:26.478 "message": "No such device" 00:34:26.478 } 00:34:26.478 11:02:54 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:34:26.478 11:02:54 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:26.478 11:02:54 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:26.478 11:02:54 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:26.478 11:02:54 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:34:26.478 11:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:26.737 11:02:54 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:34:26.737 11:02:54 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:34:26.737 11:02:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:34:26.737 11:02:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:26.737 11:02:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:34:26.737 11:02:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:34:26.737 11:02:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nMXQaGoX6m 00:34:26.737 11:02:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:26.737 11:02:54 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:26.737 11:02:54 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:34:26.737 11:02:54 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:26.737 11:02:54 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:26.737 11:02:54 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:34:26.737 11:02:54 keyring_file -- nvmf/common.sh@733 -- # python - 00:34:26.737 11:02:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nMXQaGoX6m 00:34:26.737 11:02:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nMXQaGoX6m 00:34:26.737 11:02:54 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.nMXQaGoX6m 00:34:26.737 11:02:54 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nMXQaGoX6m 00:34:26.737 11:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nMXQaGoX6m 00:34:26.996 11:02:54 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:26.996 11:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:27.254 nvme0n1 00:34:27.254 11:02:54 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:34:27.254 11:02:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:27.254 11:02:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:27.254 11:02:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:27.254 11:02:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:27.254 11:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:27.513 11:02:54 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:34:27.513 11:02:54 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:34:27.513 11:02:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:34:27.513 11:02:55 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:34:27.513 11:02:55 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:34:27.513 11:02:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:27.513 11:02:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:27.513 11:02:55 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:27.771 11:02:55 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:34:27.771 11:02:55 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:34:27.771 11:02:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:27.771 11:02:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:27.771 11:02:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:27.771 11:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:27.771 11:02:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:28.029 11:02:55 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:34:28.029 11:02:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:28.030 11:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:28.288 11:02:55 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:34:28.288 11:02:55 keyring_file -- keyring/file.sh@105 -- # jq length 00:34:28.288 11:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:28.288 11:02:55 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:34:28.288 11:02:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nMXQaGoX6m 00:34:28.288 11:02:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nMXQaGoX6m 00:34:28.546 11:02:56 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VcdiNo7w4h 00:34:28.546 11:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VcdiNo7w4h 00:34:28.805 11:02:56 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:28.805 11:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:34:29.063 nvme0n1 00:34:29.063 11:02:56 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:34:29.063 11:02:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:34:29.323 11:02:56 keyring_file -- keyring/file.sh@113 -- # config='{ 00:34:29.323 "subsystems": [ 00:34:29.323 { 00:34:29.323 "subsystem": "keyring", 00:34:29.323 "config": [ 00:34:29.323 { 00:34:29.323 "method": "keyring_file_add_key", 00:34:29.323 "params": { 00:34:29.323 "name": "key0", 00:34:29.323 "path": "/tmp/tmp.nMXQaGoX6m" 00:34:29.323 } 00:34:29.323 }, 00:34:29.323 { 00:34:29.323 "method": "keyring_file_add_key", 00:34:29.323 "params": { 00:34:29.323 "name": "key1", 00:34:29.323 "path": "/tmp/tmp.VcdiNo7w4h" 00:34:29.323 } 00:34:29.323 } 00:34:29.323 ] 00:34:29.323 
}, 00:34:29.323 { 00:34:29.323 "subsystem": "iobuf", 00:34:29.323 "config": [ 00:34:29.323 { 00:34:29.323 "method": "iobuf_set_options", 00:34:29.323 "params": { 00:34:29.323 "small_pool_count": 8192, 00:34:29.323 "large_pool_count": 1024, 00:34:29.323 "small_bufsize": 8192, 00:34:29.323 "large_bufsize": 135168, 00:34:29.323 "enable_numa": false 00:34:29.323 } 00:34:29.323 } 00:34:29.323 ] 00:34:29.323 }, 00:34:29.323 { 00:34:29.323 "subsystem": "sock", 00:34:29.323 "config": [ 00:34:29.323 { 00:34:29.323 "method": "sock_set_default_impl", 00:34:29.323 "params": { 00:34:29.323 "impl_name": "posix" 00:34:29.323 } 00:34:29.323 }, 00:34:29.323 { 00:34:29.323 "method": "sock_impl_set_options", 00:34:29.323 "params": { 00:34:29.323 "impl_name": "ssl", 00:34:29.323 "recv_buf_size": 4096, 00:34:29.323 "send_buf_size": 4096, 00:34:29.323 "enable_recv_pipe": true, 00:34:29.323 "enable_quickack": false, 00:34:29.323 "enable_placement_id": 0, 00:34:29.323 "enable_zerocopy_send_server": true, 00:34:29.323 "enable_zerocopy_send_client": false, 00:34:29.323 "zerocopy_threshold": 0, 00:34:29.323 "tls_version": 0, 00:34:29.323 "enable_ktls": false 00:34:29.323 } 00:34:29.323 }, 00:34:29.323 { 00:34:29.323 "method": "sock_impl_set_options", 00:34:29.323 "params": { 00:34:29.323 "impl_name": "posix", 00:34:29.323 "recv_buf_size": 2097152, 00:34:29.323 "send_buf_size": 2097152, 00:34:29.323 "enable_recv_pipe": true, 00:34:29.323 "enable_quickack": false, 00:34:29.323 "enable_placement_id": 0, 00:34:29.323 "enable_zerocopy_send_server": true, 00:34:29.323 "enable_zerocopy_send_client": false, 00:34:29.323 "zerocopy_threshold": 0, 00:34:29.323 "tls_version": 0, 00:34:29.323 "enable_ktls": false 00:34:29.323 } 00:34:29.323 } 00:34:29.323 ] 00:34:29.323 }, 00:34:29.323 { 00:34:29.323 "subsystem": "vmd", 00:34:29.323 "config": [] 00:34:29.323 }, 00:34:29.323 { 00:34:29.323 "subsystem": "accel", 00:34:29.323 "config": [ 00:34:29.323 { 00:34:29.323 "method": "accel_set_options", 00:34:29.323 "params": { 00:34:29.323 "small_cache_size": 128, 00:34:29.323 "large_cache_size": 16, 00:34:29.323 "task_count": 2048, 00:34:29.323 "sequence_count": 2048, 00:34:29.323 "buf_count": 2048 00:34:29.323 } 00:34:29.323 } 00:34:29.323 ] 00:34:29.323 }, 00:34:29.323 { 00:34:29.323 "subsystem": "bdev", 00:34:29.323 "config": [ 00:34:29.323 { 00:34:29.323 "method": "bdev_set_options", 00:34:29.323 "params": { 00:34:29.323 "bdev_io_pool_size": 65535, 00:34:29.323 "bdev_io_cache_size": 256, 00:34:29.323 "bdev_auto_examine": true, 00:34:29.323 "iobuf_small_cache_size": 128, 00:34:29.323 "iobuf_large_cache_size": 16 00:34:29.323 } 00:34:29.323 }, 00:34:29.323 { 00:34:29.323 "method": "bdev_raid_set_options", 00:34:29.323 "params": { 00:34:29.323 "process_window_size_kb": 1024, 00:34:29.323 "process_max_bandwidth_mb_sec": 0 00:34:29.323 } 00:34:29.323 }, 00:34:29.323 { 00:34:29.323 "method": "bdev_iscsi_set_options", 00:34:29.323 "params": { 00:34:29.323 "timeout_sec": 30 00:34:29.323 } 00:34:29.323 }, 00:34:29.323 { 00:34:29.323 "method": "bdev_nvme_set_options", 00:34:29.323 "params": { 00:34:29.323 "action_on_timeout": "none", 00:34:29.323 "timeout_us": 0, 00:34:29.323 "timeout_admin_us": 0, 00:34:29.323 "keep_alive_timeout_ms": 10000, 00:34:29.323 "arbitration_burst": 0, 00:34:29.323 "low_priority_weight": 0, 00:34:29.323 "medium_priority_weight": 0, 00:34:29.323 "high_priority_weight": 0, 00:34:29.323 "nvme_adminq_poll_period_us": 10000, 00:34:29.323 "nvme_ioq_poll_period_us": 0, 00:34:29.323 "io_queue_requests": 512, 00:34:29.323 
"delay_cmd_submit": true, 00:34:29.323 "transport_retry_count": 4, 00:34:29.323 "bdev_retry_count": 3, 00:34:29.323 "transport_ack_timeout": 0, 00:34:29.323 "ctrlr_loss_timeout_sec": 0, 00:34:29.323 "reconnect_delay_sec": 0, 00:34:29.323 "fast_io_fail_timeout_sec": 0, 00:34:29.323 "disable_auto_failback": false, 00:34:29.323 "generate_uuids": false, 00:34:29.323 "transport_tos": 0, 00:34:29.323 "nvme_error_stat": false, 00:34:29.323 "rdma_srq_size": 0, 00:34:29.323 "io_path_stat": false, 00:34:29.323 "allow_accel_sequence": false, 00:34:29.323 "rdma_max_cq_size": 0, 00:34:29.323 "rdma_cm_event_timeout_ms": 0, 00:34:29.323 "dhchap_digests": [ 00:34:29.323 "sha256", 00:34:29.323 "sha384", 00:34:29.323 "sha512" 00:34:29.323 ], 00:34:29.323 "dhchap_dhgroups": [ 00:34:29.323 "null", 00:34:29.323 "ffdhe2048", 00:34:29.323 "ffdhe3072", 00:34:29.323 "ffdhe4096", 00:34:29.323 "ffdhe6144", 00:34:29.323 "ffdhe8192" 00:34:29.323 ] 00:34:29.323 } 00:34:29.323 }, 00:34:29.323 { 00:34:29.323 "method": "bdev_nvme_attach_controller", 00:34:29.323 "params": { 00:34:29.323 "name": "nvme0", 00:34:29.323 "trtype": "TCP", 00:34:29.323 "adrfam": "IPv4", 00:34:29.324 "traddr": "127.0.0.1", 00:34:29.324 "trsvcid": "4420", 00:34:29.324 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.324 "prchk_reftag": false, 00:34:29.324 "prchk_guard": false, 00:34:29.324 "ctrlr_loss_timeout_sec": 0, 00:34:29.324 "reconnect_delay_sec": 0, 00:34:29.324 "fast_io_fail_timeout_sec": 0, 00:34:29.324 "psk": "key0", 00:34:29.324 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.324 "hdgst": false, 00:34:29.324 "ddgst": false, 00:34:29.324 "multipath": "multipath" 00:34:29.324 } 00:34:29.324 }, 00:34:29.324 { 00:34:29.324 "method": "bdev_nvme_set_hotplug", 00:34:29.324 "params": { 00:34:29.324 "period_us": 100000, 00:34:29.324 "enable": false 00:34:29.324 } 00:34:29.324 }, 00:34:29.324 { 00:34:29.324 "method": "bdev_wait_for_examine" 00:34:29.324 } 00:34:29.324 ] 00:34:29.324 }, 00:34:29.324 { 00:34:29.324 "subsystem": "nbd", 00:34:29.324 "config": [] 00:34:29.324 } 00:34:29.324 ] 00:34:29.324 }' 00:34:29.324 11:02:56 keyring_file -- keyring/file.sh@115 -- # killprocess 2960301 00:34:29.324 11:02:56 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2960301 ']' 00:34:29.324 11:02:56 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2960301 00:34:29.324 11:02:56 keyring_file -- common/autotest_common.sh@957 -- # uname 00:34:29.324 11:02:56 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:29.324 11:02:56 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2960301 00:34:29.324 11:02:56 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:29.324 11:02:56 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:29.324 11:02:56 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2960301' 00:34:29.324 killing process with pid 2960301 00:34:29.324 11:02:56 keyring_file -- common/autotest_common.sh@971 -- # kill 2960301 00:34:29.324 Received shutdown signal, test time was about 1.000000 seconds 00:34:29.324 00:34:29.324 Latency(us) 00:34:29.324 [2024-11-07T10:02:56.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:29.324 [2024-11-07T10:02:56.995Z] =================================================================================================================== 00:34:29.324 [2024-11-07T10:02:56.995Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:29.324 11:02:56 
keyring_file -- common/autotest_common.sh@976 -- # wait 2960301 00:34:29.584 11:02:57 keyring_file -- keyring/file.sh@118 -- # bperfpid=2961817 00:34:29.584 11:02:57 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2961817 /var/tmp/bperf.sock 00:34:29.584 11:02:57 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2961817 ']' 00:34:29.584 11:02:57 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:29.584 11:02:57 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:34:29.584 11:02:57 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:29.584 11:02:57 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:29.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:29.584 11:02:57 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:34:29.584 "subsystems": [ 00:34:29.584 { 00:34:29.584 "subsystem": "keyring", 00:34:29.584 "config": [ 00:34:29.584 { 00:34:29.584 "method": "keyring_file_add_key", 00:34:29.584 "params": { 00:34:29.584 "name": "key0", 00:34:29.584 "path": "/tmp/tmp.nMXQaGoX6m" 00:34:29.584 } 00:34:29.584 }, 00:34:29.584 { 00:34:29.584 "method": "keyring_file_add_key", 00:34:29.584 "params": { 00:34:29.584 "name": "key1", 00:34:29.584 "path": "/tmp/tmp.VcdiNo7w4h" 00:34:29.584 } 00:34:29.584 } 00:34:29.584 ] 00:34:29.584 }, 00:34:29.584 { 00:34:29.584 "subsystem": "iobuf", 00:34:29.584 "config": [ 00:34:29.584 { 00:34:29.584 "method": "iobuf_set_options", 00:34:29.584 "params": { 00:34:29.584 "small_pool_count": 8192, 00:34:29.584 "large_pool_count": 1024, 00:34:29.584 "small_bufsize": 8192, 00:34:29.584 "large_bufsize": 135168, 00:34:29.584 "enable_numa": false 00:34:29.584 } 00:34:29.584 } 00:34:29.584 ] 00:34:29.584 }, 00:34:29.584 { 00:34:29.584 "subsystem": "sock", 00:34:29.584 "config": [ 00:34:29.584 { 00:34:29.584 "method": "sock_set_default_impl", 00:34:29.584 "params": { 00:34:29.584 "impl_name": "posix" 00:34:29.584 } 00:34:29.584 }, 00:34:29.584 { 00:34:29.584 "method": "sock_impl_set_options", 00:34:29.584 "params": { 00:34:29.584 "impl_name": "ssl", 00:34:29.584 "recv_buf_size": 4096, 00:34:29.584 "send_buf_size": 4096, 00:34:29.584 "enable_recv_pipe": true, 00:34:29.584 "enable_quickack": false, 00:34:29.584 "enable_placement_id": 0, 00:34:29.584 "enable_zerocopy_send_server": true, 00:34:29.584 "enable_zerocopy_send_client": false, 00:34:29.584 "zerocopy_threshold": 0, 00:34:29.584 "tls_version": 0, 00:34:29.584 "enable_ktls": false 00:34:29.584 } 00:34:29.584 }, 00:34:29.584 { 00:34:29.584 "method": "sock_impl_set_options", 00:34:29.584 "params": { 00:34:29.584 "impl_name": "posix", 00:34:29.584 "recv_buf_size": 2097152, 00:34:29.584 "send_buf_size": 2097152, 00:34:29.584 "enable_recv_pipe": true, 00:34:29.584 "enable_quickack": false, 00:34:29.584 "enable_placement_id": 0, 00:34:29.584 "enable_zerocopy_send_server": true, 00:34:29.584 "enable_zerocopy_send_client": false, 00:34:29.584 "zerocopy_threshold": 0, 00:34:29.584 "tls_version": 0, 00:34:29.584 "enable_ktls": false 00:34:29.584 } 00:34:29.584 } 00:34:29.584 ] 00:34:29.584 }, 00:34:29.584 { 00:34:29.584 "subsystem": "vmd", 00:34:29.584 "config": [] 00:34:29.584 }, 00:34:29.584 { 00:34:29.584 "subsystem": "accel", 00:34:29.584 "config": [ 00:34:29.584 
{ 00:34:29.584 "method": "accel_set_options", 00:34:29.584 "params": { 00:34:29.584 "small_cache_size": 128, 00:34:29.584 "large_cache_size": 16, 00:34:29.584 "task_count": 2048, 00:34:29.584 "sequence_count": 2048, 00:34:29.584 "buf_count": 2048 00:34:29.584 } 00:34:29.584 } 00:34:29.584 ] 00:34:29.584 }, 00:34:29.584 { 00:34:29.584 "subsystem": "bdev", 00:34:29.584 "config": [ 00:34:29.584 { 00:34:29.584 "method": "bdev_set_options", 00:34:29.584 "params": { 00:34:29.584 "bdev_io_pool_size": 65535, 00:34:29.584 "bdev_io_cache_size": 256, 00:34:29.584 "bdev_auto_examine": true, 00:34:29.585 "iobuf_small_cache_size": 128, 00:34:29.585 "iobuf_large_cache_size": 16 00:34:29.585 } 00:34:29.585 }, 00:34:29.585 { 00:34:29.585 "method": "bdev_raid_set_options", 00:34:29.585 "params": { 00:34:29.585 "process_window_size_kb": 1024, 00:34:29.585 "process_max_bandwidth_mb_sec": 0 00:34:29.585 } 00:34:29.585 }, 00:34:29.585 { 00:34:29.585 "method": "bdev_iscsi_set_options", 00:34:29.585 "params": { 00:34:29.585 "timeout_sec": 30 00:34:29.585 } 00:34:29.585 }, 00:34:29.585 { 00:34:29.585 "method": "bdev_nvme_set_options", 00:34:29.585 "params": { 00:34:29.585 "action_on_timeout": "none", 00:34:29.585 "timeout_us": 0, 00:34:29.585 "timeout_admin_us": 0, 00:34:29.585 "keep_alive_timeout_ms": 10000, 00:34:29.585 "arbitration_burst": 0, 00:34:29.585 "low_priority_weight": 0, 00:34:29.585 "medium_priority_weight": 0, 00:34:29.585 "high_priority_weight": 0, 00:34:29.585 "nvme_adminq_poll_period_us": 10000, 00:34:29.585 "nvme_ioq_poll_period_us": 0, 00:34:29.585 "io_queue_requests": 512, 00:34:29.585 "delay_cmd_submit": true, 00:34:29.585 "transport_retry_count": 4, 00:34:29.585 "bdev_retry_count": 3, 00:34:29.585 "transport_ack_timeout": 0, 00:34:29.585 "ctrlr_loss_timeout_sec": 0, 00:34:29.585 "reconnect_delay_sec": 0, 00:34:29.585 "fast_io_fail_timeout_sec": 0, 00:34:29.585 "disable_auto_failback": false, 00:34:29.585 "generate_uuids": false, 00:34:29.585 "transport_tos": 0, 00:34:29.585 "nvme_error_stat": false, 00:34:29.585 "rdma_srq_size": 0, 00:34:29.585 "io_path_stat": false, 00:34:29.585 "allow_accel_sequence": false, 00:34:29.585 "rdma_max_cq_size": 0, 00:34:29.585 "rdma_cm_event_timeout_ms": 0, 00:34:29.585 "dhchap_digests": [ 00:34:29.585 "sha256", 00:34:29.585 "sha384", 00:34:29.585 "sha512" 00:34:29.585 ], 00:34:29.585 "dhchap_dhgroups": [ 00:34:29.585 "null", 00:34:29.585 "ffdhe2048", 00:34:29.585 "ffdhe3072", 00:34:29.585 "ffdhe4096", 00:34:29.585 "ffdhe6144", 00:34:29.585 "ffdhe8192" 00:34:29.585 ] 00:34:29.585 } 00:34:29.585 }, 00:34:29.585 { 00:34:29.585 "method": "bdev_nvme_attach_controller", 00:34:29.585 "params": { 00:34:29.585 "name": "nvme0", 00:34:29.585 "trtype": "TCP", 00:34:29.585 "adrfam": "IPv4", 00:34:29.585 "traddr": "127.0.0.1", 00:34:29.585 "trsvcid": "4420", 00:34:29.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.585 "prchk_reftag": false, 00:34:29.585 "prchk_guard": false, 00:34:29.585 "ctrlr_loss_timeout_sec": 0, 00:34:29.585 "reconnect_delay_sec": 0, 00:34:29.585 "fast_io_fail_timeout_sec": 0, 00:34:29.585 "psk": "key0", 00:34:29.585 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.585 "hdgst": false, 00:34:29.585 "ddgst": false, 00:34:29.585 "multipath": "multipath" 00:34:29.585 } 00:34:29.585 }, 00:34:29.585 { 00:34:29.585 "method": "bdev_nvme_set_hotplug", 00:34:29.585 "params": { 00:34:29.585 "period_us": 100000, 00:34:29.585 "enable": false 00:34:29.585 } 00:34:29.585 }, 00:34:29.585 { 00:34:29.585 "method": "bdev_wait_for_examine" 00:34:29.585 } 00:34:29.585 
] 00:34:29.585 }, 00:34:29.585 { 00:34:29.585 "subsystem": "nbd", 00:34:29.585 "config": [] 00:34:29.585 } 00:34:29.585 ] 00:34:29.585 }' 00:34:29.585 11:02:57 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:29.585 11:02:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:29.585 [2024-11-07 11:02:57.097559] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 00:34:29.585 [2024-11-07 11:02:57.097609] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2961817 ] 00:34:29.585 [2024-11-07 11:02:57.159272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:29.585 [2024-11-07 11:02:57.200009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:29.844 [2024-11-07 11:02:57.361298] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:30.411 11:02:57 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:30.411 11:02:57 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:34:30.411 11:02:57 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:34:30.411 11:02:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:30.411 11:02:57 keyring_file -- keyring/file.sh@121 -- # jq length 00:34:30.669 11:02:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:34:30.670 11:02:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:34:30.670 11:02:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:34:30.670 11:02:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:30.670 11:02:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:30.670 11:02:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:34:30.670 11:02:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:30.670 11:02:58 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:34:30.670 11:02:58 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:34:30.670 11:02:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:34:30.670 11:02:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:34:30.670 11:02:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:30.670 11:02:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:30.670 11:02:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:34:30.928 11:02:58 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:34:30.928 11:02:58 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:34:30.928 11:02:58 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:34:30.928 11:02:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:34:31.187 11:02:58 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:34:31.187 11:02:58 keyring_file -- keyring/file.sh@1 -- # cleanup 00:34:31.187 11:02:58 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.nMXQaGoX6m /tmp/tmp.VcdiNo7w4h 00:34:31.187 11:02:58 keyring_file -- keyring/file.sh@20 -- # killprocess 2961817 00:34:31.187 11:02:58 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2961817 ']' 00:34:31.187 11:02:58 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2961817 00:34:31.187 11:02:58 keyring_file -- common/autotest_common.sh@957 -- # uname 00:34:31.187 11:02:58 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:31.187 11:02:58 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2961817 00:34:31.187 11:02:58 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:31.187 11:02:58 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:31.187 11:02:58 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2961817' 00:34:31.187 killing process with pid 2961817 00:34:31.187 11:02:58 keyring_file -- common/autotest_common.sh@971 -- # kill 2961817 00:34:31.187 Received shutdown signal, test time was about 1.000000 seconds 00:34:31.187 00:34:31.187 Latency(us) 00:34:31.187 [2024-11-07T10:02:58.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:31.187 [2024-11-07T10:02:58.858Z] =================================================================================================================== 00:34:31.187 [2024-11-07T10:02:58.859Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:31.188 11:02:58 keyring_file -- common/autotest_common.sh@976 -- # wait 2961817 00:34:31.447 11:02:58 keyring_file -- keyring/file.sh@21 -- # killprocess 2960126 00:34:31.447 11:02:58 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2960126 ']' 00:34:31.447 11:02:58 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2960126 00:34:31.447 11:02:58 keyring_file -- common/autotest_common.sh@957 -- # uname 00:34:31.447 11:02:58 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:31.447 11:02:58 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2960126 00:34:31.447 11:02:58 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:31.447 11:02:58 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:31.447 11:02:58 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2960126' 00:34:31.447 killing process with pid 2960126 00:34:31.447 11:02:58 keyring_file -- common/autotest_common.sh@971 -- # kill 2960126 00:34:31.447 11:02:58 keyring_file -- common/autotest_common.sh@976 -- # wait 2960126 00:34:31.706 00:34:31.706 real 0m11.835s 00:34:31.706 user 0m29.387s 00:34:31.706 sys 0m2.715s 00:34:31.706 11:02:59 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:31.706 11:02:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:34:31.706 ************************************ 00:34:31.706 END TEST keyring_file 00:34:31.706 ************************************ 00:34:31.706 11:02:59 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:34:31.706 11:02:59 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:31.706 11:02:59 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:31.706 11:02:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:31.706 11:02:59 -- 
common/autotest_common.sh@10 -- # set +x 00:34:31.706 ************************************ 00:34:31.706 START TEST keyring_linux 00:34:31.706 ************************************ 00:34:31.706 11:02:59 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:34:31.706 Joined session keyring: 968506227 00:34:31.966 * Looking for test storage... 00:34:31.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:34:31.966 11:02:59 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:31.966 11:02:59 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:34:31.966 11:02:59 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:31.966 11:02:59 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@345 -- # : 1 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:31.966 11:02:59 keyring_linux -- scripts/common.sh@368 -- # return 0 00:34:31.966 11:02:59 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:31.966 11:02:59 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:31.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.966 --rc genhtml_branch_coverage=1 00:34:31.966 --rc genhtml_function_coverage=1 00:34:31.966 --rc genhtml_legend=1 00:34:31.966 --rc geninfo_all_blocks=1 00:34:31.966 --rc geninfo_unexecuted_blocks=1 00:34:31.966 00:34:31.966 ' 00:34:31.966 11:02:59 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:31.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.966 --rc genhtml_branch_coverage=1 00:34:31.966 --rc genhtml_function_coverage=1 00:34:31.966 --rc genhtml_legend=1 00:34:31.966 --rc geninfo_all_blocks=1 00:34:31.966 --rc geninfo_unexecuted_blocks=1 00:34:31.966 00:34:31.966 ' 00:34:31.966 11:02:59 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:31.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.966 --rc genhtml_branch_coverage=1 00:34:31.966 --rc genhtml_function_coverage=1 00:34:31.966 --rc genhtml_legend=1 00:34:31.966 --rc geninfo_all_blocks=1 00:34:31.966 --rc geninfo_unexecuted_blocks=1 00:34:31.966 00:34:31.966 ' 00:34:31.966 11:02:59 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:31.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.966 --rc genhtml_branch_coverage=1 00:34:31.966 --rc genhtml_function_coverage=1 00:34:31.966 --rc genhtml_legend=1 00:34:31.966 --rc geninfo_all_blocks=1 00:34:31.966 --rc geninfo_unexecuted_blocks=1 00:34:31.966 00:34:31.966 ' 00:34:31.966 11:02:59 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:34:31.966 11:02:59 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:31.966 11:02:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:31.967 11:02:59 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:34:31.967 11:02:59 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:31.967 11:02:59 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:31.967 11:02:59 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:31.967 11:02:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.967 11:02:59 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.967 11:02:59 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.967 11:02:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:34:31.967 11:02:59 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:31.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:34:31.967 11:02:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:34:31.967 11:02:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:34:31.967 11:02:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:34:31.967 11:02:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:34:31.967 11:02:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:34:31.967 11:02:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:34:31.967 /tmp/:spdk-test:key0 00:34:31.967 11:02:59 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:34:31.967 11:02:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:34:31.967 
11:02:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:34:31.967 11:02:59 keyring_linux -- nvmf/common.sh@733 -- # python - 00:34:32.226 11:02:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:34:32.226 11:02:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:34:32.226 /tmp/:spdk-test:key1 00:34:32.226 11:02:59 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2962315 00:34:32.226 11:02:59 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2962315 00:34:32.226 11:02:59 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:34:32.226 11:02:59 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 2962315 ']' 00:34:32.226 11:02:59 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.226 11:02:59 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:32.226 11:02:59 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:32.226 11:02:59 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:32.226 11:02:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:32.226 [2024-11-07 11:02:59.703507] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
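The two files just written under /tmp hold the keys in the NVMe TLS PSK interchange form emitted by format_interchange_psk: the NVMeTLSkey-1 prefix, the digest indicator that was passed in (0 for both keys here), and a base64 payload that decodes to the configured hex key followed by a 4-byte check value. Below is a minimal sketch that unpacks the key0 string from this run; treating the 4-byte trailer as a CRC-32 over the key is an assumption about the format, not something the log itself shows.

  # Unpack the interchange PSK written to /tmp/:spdk-test:key0 above.
  psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

  digest=$(echo "$psk" | cut -d: -f2)    # digest indicator ("00")
  payload=$(echo "$psk" | cut -d: -f3)   # base64(configured key + 4-byte trailer)

  echo "digest indicator: $digest"
  printf 'configured key  : '
  echo "$payload" | base64 -d | head -c 32; echo           # 00112233445566778899aabbccddeeff
  printf 'trailer bytes   : '
  echo "$payload" | base64 -d | tail -c 4 | od -An -tx1    # assumed to be a check value (CRC-32) over the key

The same unpacking applied to the key1 string yields 112233445566778899aabbccddeeff00, matching the key1= value set at the top of the test.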
00:34:32.226 [2024-11-07 11:02:59.703557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2962315 ] 00:34:32.226 [2024-11-07 11:02:59.766672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.226 [2024-11-07 11:02:59.807038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.486 11:03:00 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:32.486 11:03:00 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:34:32.486 11:03:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:34:32.486 11:03:00 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.486 11:03:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:32.486 [2024-11-07 11:03:00.024733] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:32.486 null0 00:34:32.486 [2024-11-07 11:03:00.056768] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:34:32.486 [2024-11-07 11:03:00.057151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:34:32.486 11:03:00 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.486 11:03:00 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:34:32.486 44390882 00:34:32.486 11:03:00 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:34:32.486 216003124 00:34:32.486 11:03:00 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2962393 00:34:32.486 11:03:00 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2962393 /var/tmp/bperf.sock 00:34:32.486 11:03:00 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 2962393 ']' 00:34:32.486 11:03:00 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:32.486 11:03:00 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:32.486 11:03:00 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:32.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:32.486 11:03:00 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:32.486 11:03:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:32.486 11:03:00 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:34:32.486 [2024-11-07 11:03:00.129027] Starting SPDK v25.01-pre git sha1 899af6c35 / DPDK 24.03.0 initialization... 
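Those two keyctl add user calls are the heart of the keyring_linux setup: the interchange PSKs are linked into the kernel session keyring (the "Joined session keyring" line at the start of the test) under the names :spdk-test:key0 and :spdk-test:key1, and everything later refers to them by name. Stripped of the test plumbing, the flow is the short sequence below; the serial numbers 44390882 and 216003124 seen in the log are kernel-assigned and will differ on any other run, and reading the payload back out of the /tmp files is equivalent to pasting the literal NVMeTLSkey-1 strings as the log does.

  # Link both PSKs into the session keyring (@s) as "user" keys.
  keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s
  keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s

  # Resolve a name to its serial number and read the payload back,
  # which is what the test's check_keys does to verify the key contents.
  sn=$(keyctl search @s user :spdk-test:key0)
  keyctl print "$sn"          # prints the NVMeTLSkey-1:00:... string

  # Cleanup, mirroring the test's unlink_key helper.
  keyctl unlink "$(keyctl search @s user :spdk-test:key0)"
  keyctl unlink "$(keyctl search @s user :spdk-test:key1)"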
00:34:32.486 [2024-11-07 11:03:00.129073] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2962393 ] 00:34:32.745 [2024-11-07 11:03:00.195183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.745 [2024-11-07 11:03:00.237930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:32.745 11:03:00 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:32.745 11:03:00 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:34:32.745 11:03:00 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:34:32.745 11:03:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:34:33.004 11:03:00 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:34:33.004 11:03:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:33.263 11:03:00 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:33.263 11:03:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:34:33.263 [2024-11-07 11:03:00.898378] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:34:33.522 nvme0n1 00:34:33.523 11:03:00 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:34:33.523 11:03:00 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:34:33.523 11:03:00 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:33.523 11:03:00 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:33.523 11:03:00 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:33.523 11:03:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:33.781 11:03:01 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:34:33.781 11:03:01 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:33.781 11:03:01 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:34:33.781 11:03:01 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:34:33.781 11:03:01 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:34:33.781 11:03:01 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:34:33.781 11:03:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:33.781 11:03:01 keyring_linux -- keyring/linux.sh@25 -- # sn=44390882 00:34:33.781 11:03:01 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:34:33.781 11:03:01 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:33.781 11:03:01 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 44390882 == \4\4\3\9\0\8\8\2 ]] 00:34:33.781 11:03:01 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 44390882 00:34:33.781 11:03:01 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:34:33.781 11:03:01 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:34.040 Running I/O for 1 seconds... 00:34:34.977 20142.00 IOPS, 78.68 MiB/s 00:34:34.977 Latency(us) 00:34:34.977 [2024-11-07T10:03:02.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:34.977 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:34.977 nvme0n1 : 1.01 20141.98 78.68 0.00 0.00 6332.96 5356.86 12594.31 00:34:34.977 [2024-11-07T10:03:02.648Z] =================================================================================================================== 00:34:34.977 [2024-11-07T10:03:02.648Z] Total : 20141.98 78.68 0.00 0.00 6332.96 5356.86 12594.31 00:34:34.977 { 00:34:34.977 "results": [ 00:34:34.977 { 00:34:34.977 "job": "nvme0n1", 00:34:34.977 "core_mask": "0x2", 00:34:34.977 "workload": "randread", 00:34:34.977 "status": "finished", 00:34:34.977 "queue_depth": 128, 00:34:34.977 "io_size": 4096, 00:34:34.977 "runtime": 1.006356, 00:34:34.977 "iops": 20141.977590435195, 00:34:34.977 "mibps": 78.67959996263748, 00:34:34.977 "io_failed": 0, 00:34:34.977 "io_timeout": 0, 00:34:34.977 "avg_latency_us": 6332.964054481886, 00:34:34.977 "min_latency_us": 5356.855652173913, 00:34:34.977 "max_latency_us": 12594.30956521739 00:34:34.977 } 00:34:34.977 ], 00:34:34.977 "core_count": 1 00:34:34.977 } 00:34:34.977 11:03:02 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:34:34.977 11:03:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:34:35.237 11:03:02 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:34:35.237 11:03:02 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:34:35.237 11:03:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:34:35.237 11:03:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:34:35.237 11:03:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:34:35.237 11:03:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:34:35.496 11:03:02 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:34:35.496 11:03:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:34:35.496 11:03:02 keyring_linux -- keyring/linux.sh@23 -- # return 00:34:35.496 11:03:02 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:35.496 11:03:02 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:34:35.496 11:03:02 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:34:35.496 11:03:02 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:34:35.496 11:03:02 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:35.496 11:03:02 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:34:35.496 11:03:02 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:35.496 11:03:02 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:35.496 11:03:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:34:35.496 [2024-11-07 11:03:03.115849] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:34:35.496 [2024-11-07 11:03:03.116564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f8f80 (107): Transport endpoint is not connected 00:34:35.496 [2024-11-07 11:03:03.117558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f8f80 (9): Bad file descriptor 00:34:35.496 [2024-11-07 11:03:03.118560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:34:35.496 [2024-11-07 11:03:03.118569] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:34:35.496 [2024-11-07 11:03:03.118577] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:34:35.496 [2024-11-07 11:03:03.118585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
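Both attach attempts in this test are driven through the same rpc.py wrapper against bdevperf's /var/tmp/bperf.sock socket. Condensed, the client-side sequence is: enable the keyring_linux backend while bdevperf is still idle under --wait-for-rpc, complete framework init, then call bdev_nvme_attach_controller with --psk naming a keyring entry rather than a file path. The call with :spdk-test:key0 succeeded earlier and produced nvme0n1; the identical call with :spdk-test:key1 is the one rejected here, and its JSON-RPC request and Input/output error response are dumped just below. A sketch using only commands visible in this run, with the long paths shortened to the $rpc and $sock shorthands for readability:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  # bdevperf was launched with -z --wait-for-rpc, so the keyring backend can be
  # enabled before the framework starts.
  "$rpc" -s "$sock" keyring_linux_set_options --enable
  "$rpc" -s "$sock" framework_start_init

  # Attach using the session-keyring PSK; with :spdk-test:key0 this creates nvme0n1,
  # while the same call with :spdk-test:key1 is torn down during connection setup
  # and the RPC returns code -5 (Input/output error).
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key0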
00:34:35.496 request: 00:34:35.496 { 00:34:35.496 "name": "nvme0", 00:34:35.496 "trtype": "tcp", 00:34:35.496 "traddr": "127.0.0.1", 00:34:35.496 "adrfam": "ipv4", 00:34:35.496 "trsvcid": "4420", 00:34:35.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:35.496 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:35.496 "prchk_reftag": false, 00:34:35.496 "prchk_guard": false, 00:34:35.496 "hdgst": false, 00:34:35.496 "ddgst": false, 00:34:35.496 "psk": ":spdk-test:key1", 00:34:35.496 "allow_unrecognized_csi": false, 00:34:35.496 "method": "bdev_nvme_attach_controller", 00:34:35.496 "req_id": 1 00:34:35.496 } 00:34:35.496 Got JSON-RPC error response 00:34:35.496 response: 00:34:35.496 { 00:34:35.496 "code": -5, 00:34:35.496 "message": "Input/output error" 00:34:35.496 } 00:34:35.496 11:03:03 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:34:35.496 11:03:03 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:35.496 11:03:03 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:35.496 11:03:03 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@33 -- # sn=44390882 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 44390882 00:34:35.496 1 links removed 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@33 -- # sn=216003124 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 216003124 00:34:35.496 1 links removed 00:34:35.496 11:03:03 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2962393 00:34:35.496 11:03:03 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 2962393 ']' 00:34:35.496 11:03:03 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 2962393 00:34:35.496 11:03:03 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:34:35.496 11:03:03 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:35.496 11:03:03 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2962393 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2962393' 00:34:35.755 killing process with pid 2962393 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@971 -- # kill 2962393 00:34:35.755 Received shutdown signal, test time was about 1.000000 seconds 00:34:35.755 00:34:35.755 
Latency(us) 00:34:35.755 [2024-11-07T10:03:03.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:35.755 [2024-11-07T10:03:03.426Z] =================================================================================================================== 00:34:35.755 [2024-11-07T10:03:03.426Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@976 -- # wait 2962393 00:34:35.755 11:03:03 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2962315 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 2962315 ']' 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 2962315 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2962315 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2962315' 00:34:35.755 killing process with pid 2962315 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@971 -- # kill 2962315 00:34:35.755 11:03:03 keyring_linux -- common/autotest_common.sh@976 -- # wait 2962315 00:34:36.324 00:34:36.324 real 0m4.354s 00:34:36.324 user 0m8.125s 00:34:36.324 sys 0m1.468s 00:34:36.324 11:03:03 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:36.324 11:03:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:34:36.324 ************************************ 00:34:36.324 END TEST keyring_linux 00:34:36.324 ************************************ 00:34:36.324 11:03:03 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:34:36.324 11:03:03 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:36.324 11:03:03 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:36.324 11:03:03 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:34:36.324 11:03:03 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:34:36.324 11:03:03 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:34:36.324 11:03:03 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:36.324 11:03:03 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:36.324 11:03:03 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:36.324 11:03:03 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:34:36.324 11:03:03 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:36.324 11:03:03 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:34:36.324 11:03:03 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:36.324 11:03:03 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:36.324 11:03:03 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:34:36.324 11:03:03 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:34:36.324 11:03:03 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:34:36.324 11:03:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:36.324 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:34:36.324 11:03:03 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:34:36.324 11:03:03 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:34:36.324 11:03:03 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:34:36.324 11:03:03 -- common/autotest_common.sh@10 -- # set +x 00:34:40.506 INFO: APP EXITING 
00:34:40.506 INFO: killing all VMs 00:34:40.506 INFO: killing vhost app 00:34:40.506 INFO: EXIT DONE 00:34:43.033 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:34:43.033 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:34:43.033 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:34:46.324 Cleaning 00:34:46.324 Removing: /var/run/dpdk/spdk0/config 00:34:46.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:46.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:46.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:46.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:46.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:46.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:46.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:46.324 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:46.324 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:46.324 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:46.324 Removing: /var/run/dpdk/spdk1/config 00:34:46.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:46.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:46.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:46.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:46.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:46.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:46.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:46.324 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:46.324 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:46.324 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:46.324 Removing: /var/run/dpdk/spdk2/config 00:34:46.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:46.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:46.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:46.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:46.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:46.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:46.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:46.324 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:46.324 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:46.324 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:46.324 Removing: /var/run/dpdk/spdk3/config 00:34:46.324 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:46.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:46.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:46.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:46.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:46.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:46.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:46.324 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:46.324 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:46.324 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:46.324 Removing: /var/run/dpdk/spdk4/config 00:34:46.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:46.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:46.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:46.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:46.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:46.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:46.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:46.324 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:46.324 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:46.324 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:46.324 Removing: /dev/shm/bdev_svc_trace.1 00:34:46.324 Removing: /dev/shm/nvmf_trace.0 00:34:46.324 Removing: /dev/shm/spdk_tgt_trace.pid2490219 00:34:46.324 Removing: /var/run/dpdk/spdk0 00:34:46.324 Removing: /var/run/dpdk/spdk1 00:34:46.324 Removing: /var/run/dpdk/spdk2 00:34:46.324 Removing: /var/run/dpdk/spdk3 00:34:46.324 Removing: /var/run/dpdk/spdk4 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2488081 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2489132 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2490219 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2490854 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2491782 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2491818 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2492790 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2493011 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2493196 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2494881 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2496169 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2496464 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2496751 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2497056 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2497346 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2497514 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2497672 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2497998 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2498714 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2502205 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2502432 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2502686 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2502689 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2503184 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2503195 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2503683 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2503692 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2503948 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2504070 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2504216 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2504417 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2504794 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2505044 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2505345 00:34:46.324 Removing: 
/var/run/dpdk/spdk_pid2509056 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2513305 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2523295 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2523856 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2528127 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2528391 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2532643 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2538522 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2541129 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2551851 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2560554 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2562391 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2563313 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2579966 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2583867 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2628832 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2634011 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2639773 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2646322 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2646412 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2647183 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2648189 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2649147 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2649996 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2650113 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2650415 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2650474 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2650476 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2651389 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2652304 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2653160 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2653695 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2653698 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2653935 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2655004 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2656050 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2664226 00:34:46.324 Removing: /var/run/dpdk/spdk_pid2693004 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2697545 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2699255 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2700923 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2701115 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2701344 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2701364 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2701865 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2703700 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2704468 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2704958 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2707077 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2707571 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2708178 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2712340 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2717728 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2717729 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2717730 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2721500 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2730339 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2734166 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2740325 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2741468 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2742811 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2744326 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2748805 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2753136 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2757156 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2764315 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2764421 00:34:46.325 Removing: 
/var/run/dpdk/spdk_pid2769018 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2769246 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2769473 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2769930 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2769938 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2774407 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2774896 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2779413 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2782368 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2787693 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2793045 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2801662 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2808449 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2808502 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2827386 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2828066 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2828645 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2829121 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2829860 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2830339 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2831006 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2831497 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2835529 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2835760 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2841745 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2841892 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2847348 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2851360 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2861090 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2861777 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2865806 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2866055 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2870295 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2876474 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2879059 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2888759 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2897330 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2899024 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2899941 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2915838 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2919640 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2922857 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2930547 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2930584 00:34:46.325 Removing: /var/run/dpdk/spdk_pid2935388 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2937354 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2939320 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2940383 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2942487 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2943621 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2952136 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2952606 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2953065 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2955324 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2955801 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2956345 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2960126 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2960301 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2961817 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2962315 00:34:46.583 Removing: /var/run/dpdk/spdk_pid2962393 00:34:46.583 Clean 00:34:46.583 11:03:14 -- common/autotest_common.sh@1451 -- # return 0 00:34:46.583 11:03:14 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:34:46.583 11:03:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:46.583 11:03:14 -- common/autotest_common.sh@10 -- # set +x 00:34:46.583 11:03:14 -- 
spdk/autotest.sh@387 -- # timing_exit autotest 00:34:46.583 11:03:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:46.583 11:03:14 -- common/autotest_common.sh@10 -- # set +x 00:34:46.583 11:03:14 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:46.583 11:03:14 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:46.583 11:03:14 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:46.583 11:03:14 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:34:46.583 11:03:14 -- spdk/autotest.sh@394 -- # hostname 00:34:46.583 11:03:14 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:46.841 geninfo: WARNING: invalid characters removed from testname! 00:35:08.759 11:03:35 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:10.659 11:03:37 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:12.558 11:03:39 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:14.458 11:03:41 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:16.361 11:03:43 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:18.261 11:03:45 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:35:20.179 11:03:47 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:20.179 11:03:47 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:20.179 11:03:47 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:35:20.179 11:03:47 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:20.179 11:03:47 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:20.179 11:03:47 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:35:20.179 + [[ -n 2410656 ]] 00:35:20.179 + sudo kill 2410656 00:35:20.249 [Pipeline] } 00:35:20.268 [Pipeline] // stage 00:35:20.272 [Pipeline] } 00:35:20.287 [Pipeline] // timeout 00:35:20.292 [Pipeline] } 00:35:20.306 [Pipeline] // catchError 00:35:20.311 [Pipeline] } 00:35:20.326 [Pipeline] // wrap 00:35:20.332 [Pipeline] } 00:35:20.346 [Pipeline] // catchError 00:35:20.357 [Pipeline] stage 00:35:20.360 [Pipeline] { (Epilogue) 00:35:20.374 [Pipeline] catchError 00:35:20.375 [Pipeline] { 00:35:20.388 [Pipeline] echo 00:35:20.390 Cleanup processes 00:35:20.397 [Pipeline] sh 00:35:20.749 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:20.749 2973197 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:20.763 [Pipeline] sh 00:35:21.048 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:35:21.048 ++ grep -v 'sudo pgrep' 00:35:21.048 ++ awk '{print $1}' 00:35:21.048 + sudo kill -9 00:35:21.048 + true 00:35:21.059 [Pipeline] sh 00:35:21.343 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:33.555 [Pipeline] sh 00:35:33.838 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:33.838 Artifacts sizes are good 00:35:33.852 [Pipeline] archiveArtifacts 00:35:33.859 Archiving artifacts 00:35:33.996 [Pipeline] sh 00:35:34.280 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:35:34.293 [Pipeline] cleanWs 00:35:34.302 [WS-CLEANUP] Deleting project workspace... 00:35:34.303 [WS-CLEANUP] Deferred wipeout is used... 00:35:34.309 [WS-CLEANUP] done 00:35:34.311 [Pipeline] } 00:35:34.327 [Pipeline] // catchError 00:35:34.338 [Pipeline] sh 00:35:34.614 + logger -p user.info -t JENKINS-CI 00:35:34.622 [Pipeline] } 00:35:34.635 [Pipeline] // stage 00:35:34.640 [Pipeline] } 00:35:34.654 [Pipeline] // node 00:35:34.659 [Pipeline] End of Pipeline 00:35:34.693 Finished: SUCCESS
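The coverage post-processing near the end of the run is plain lcov driven by autotest.sh: the baseline and post-test captures are merged into cov_total.info, which is then successively stripped of DPDK, system, and example/app sources before the timing and flamegraph steps. A condensed sketch of those invocations, keeping only the branch/function --rc switches (the full commands above also carry the genhtml/geninfo --rc flags) and shortening the output directory to $out:

  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
  rc='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

  # Merge the pre-test baseline with the post-test capture.
  lcov $rc -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

  # Strip coverage that is not SPDK's own code: DPDK, system headers, examples, tools.
  lcov $rc -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
  lcov $rc -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
  lcov $rc -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
  lcov $rc -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
  lcov $rc -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"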